00:00:00.002 Started by upstream project "autotest-nightly" build number 4125 00:00:00.002 originally caused by: 00:00:00.003 Started by upstream project "nightly-trigger" build number 3487 00:00:00.003 originally caused by: 00:00:00.003 Started by timer 00:00:00.003 Started by timer 00:00:00.184 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.184 The recommended git tool is: git 00:00:00.184 using credential 00000000-0000-0000-0000-000000000002 00:00:00.186 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.235 Fetching changes from the remote Git repository 00:00:00.237 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.279 Using shallow fetch with depth 1 00:00:00.279 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.279 > git --version # timeout=10 00:00:00.325 > git --version # 'git version 2.39.2' 00:00:00.325 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.349 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.349 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.919 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.928 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.938 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:09.939 > git config core.sparsecheckout # timeout=10 00:00:09.948 > git read-tree -mu HEAD # timeout=10 00:00:09.963 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:09.980 Commit message: "kid: add issue 3541" 00:00:09.981 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:10.061 [Pipeline] Start of Pipeline 00:00:10.074 [Pipeline] library 00:00:10.075 Loading library shm_lib@master 00:00:10.075 Library shm_lib@master is cached. Copying from home. 00:00:10.090 [Pipeline] node 00:00:10.101 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:10.103 [Pipeline] { 00:00:10.113 [Pipeline] catchError 00:00:10.114 [Pipeline] { 00:00:10.126 [Pipeline] wrap 00:00:10.134 [Pipeline] { 00:00:10.140 [Pipeline] stage 00:00:10.142 [Pipeline] { (Prologue) 00:00:10.155 [Pipeline] echo 00:00:10.157 Node: VM-host-SM9 00:00:10.161 [Pipeline] cleanWs 00:00:10.169 [WS-CLEANUP] Deleting project workspace... 00:00:10.169 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.175 [WS-CLEANUP] done 00:00:10.365 [Pipeline] setCustomBuildProperty 00:00:10.434 [Pipeline] httpRequest 00:00:10.801 [Pipeline] echo 00:00:10.802 Sorcerer 10.211.164.101 is alive 00:00:10.810 [Pipeline] retry 00:00:10.812 [Pipeline] { 00:00:10.823 [Pipeline] httpRequest 00:00:10.827 HttpMethod: GET 00:00:10.827 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:10.828 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:10.844 Response Code: HTTP/1.1 200 OK 00:00:10.844 Success: Status code 200 is in the accepted range: 200,404 00:00:10.845 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:31.526 [Pipeline] } 00:00:31.543 [Pipeline] // retry 00:00:31.551 [Pipeline] sh 00:00:31.832 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:31.847 [Pipeline] httpRequest 00:00:32.632 [Pipeline] echo 00:00:32.633 Sorcerer 10.211.164.101 is alive 00:00:32.642 [Pipeline] retry 00:00:32.643 [Pipeline] { 00:00:32.657 [Pipeline] httpRequest 00:00:32.661 HttpMethod: GET 00:00:32.661 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:32.662 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:32.662 Response Code: HTTP/1.1 200 OK 00:00:32.663 Success: Status code 200 is in the accepted range: 200,404 00:00:32.663 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:48.342 [Pipeline] } 00:00:48.363 [Pipeline] // retry 00:00:48.371 [Pipeline] sh 00:00:48.655 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:51.204 [Pipeline] sh 00:00:51.486 + git -C spdk log --oneline -n5 00:00:51.486 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:51.486 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:51.486 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:51.486 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:51.486 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:00:51.504 [Pipeline] writeFile 00:00:51.519 [Pipeline] sh 00:00:51.803 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:51.814 [Pipeline] sh 00:00:52.094 + cat autorun-spdk.conf 00:00:52.094 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.094 SPDK_TEST_NVMF=1 00:00:52.094 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.094 SPDK_TEST_URING=1 00:00:52.094 SPDK_TEST_VFIOUSER=1 00:00:52.094 SPDK_TEST_USDT=1 00:00:52.094 SPDK_RUN_ASAN=1 00:00:52.094 SPDK_RUN_UBSAN=1 00:00:52.094 NET_TYPE=virt 00:00:52.094 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:52.100 RUN_NIGHTLY=1 00:00:52.102 [Pipeline] } 00:00:52.116 [Pipeline] // stage 00:00:52.130 [Pipeline] stage 00:00:52.132 [Pipeline] { (Run VM) 00:00:52.144 [Pipeline] sh 00:00:52.423 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:52.423 + echo 'Start stage prepare_nvme.sh' 00:00:52.423 Start stage prepare_nvme.sh 00:00:52.423 + [[ -n 1 ]] 00:00:52.423 + disk_prefix=ex1 00:00:52.423 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:52.423 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:52.423 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:52.423 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.423 ++ 
SPDK_TEST_NVMF=1 00:00:52.423 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.423 ++ SPDK_TEST_URING=1 00:00:52.423 ++ SPDK_TEST_VFIOUSER=1 00:00:52.423 ++ SPDK_TEST_USDT=1 00:00:52.423 ++ SPDK_RUN_ASAN=1 00:00:52.423 ++ SPDK_RUN_UBSAN=1 00:00:52.423 ++ NET_TYPE=virt 00:00:52.423 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:52.423 ++ RUN_NIGHTLY=1 00:00:52.423 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:52.423 + nvme_files=() 00:00:52.423 + declare -A nvme_files 00:00:52.423 + backend_dir=/var/lib/libvirt/images/backends 00:00:52.423 + nvme_files['nvme.img']=5G 00:00:52.423 + nvme_files['nvme-cmb.img']=5G 00:00:52.423 + nvme_files['nvme-multi0.img']=4G 00:00:52.423 + nvme_files['nvme-multi1.img']=4G 00:00:52.423 + nvme_files['nvme-multi2.img']=4G 00:00:52.423 + nvme_files['nvme-openstack.img']=8G 00:00:52.423 + nvme_files['nvme-zns.img']=5G 00:00:52.423 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:52.423 + (( SPDK_TEST_FTL == 1 )) 00:00:52.423 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:52.423 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:52.423 + for nvme in "${!nvme_files[@]}" 00:00:52.423 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:52.423 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:52.424 + for nvme in "${!nvme_files[@]}" 00:00:52.424 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:52.682 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:52.682 + for nvme in "${!nvme_files[@]}" 00:00:52.682 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:52.682 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:52.682 + for nvme in "${!nvme_files[@]}" 00:00:52.682 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:52.941 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:52.941 + for nvme in "${!nvme_files[@]}" 00:00:52.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:53.199 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.199 + for nvme in "${!nvme_files[@]}" 00:00:53.199 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:53.458 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.458 + for nvme in "${!nvme_files[@]}" 00:00:53.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:53.458 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.458 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:53.717 + echo 'End stage prepare_nvme.sh' 00:00:53.717 End stage prepare_nvme.sh 00:00:53.728 [Pipeline] sh 00:00:54.008 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:54.008 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 
--nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:00:54.008 00:00:54.008 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:54.008 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:54.008 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:54.008 HELP=0 00:00:54.008 DRY_RUN=0 00:00:54.008 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:54.008 NVME_DISKS_TYPE=nvme,nvme, 00:00:54.008 NVME_AUTO_CREATE=0 00:00:54.008 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:54.008 NVME_CMB=,, 00:00:54.008 NVME_PMR=,, 00:00:54.008 NVME_ZNS=,, 00:00:54.008 NVME_MS=,, 00:00:54.008 NVME_FDP=,, 00:00:54.008 SPDK_VAGRANT_DISTRO=fedora39 00:00:54.008 SPDK_VAGRANT_VMCPU=10 00:00:54.008 SPDK_VAGRANT_VMRAM=12288 00:00:54.008 SPDK_VAGRANT_PROVIDER=libvirt 00:00:54.008 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:54.008 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:54.008 SPDK_OPENSTACK_NETWORK=0 00:00:54.009 VAGRANT_PACKAGE_BOX=0 00:00:54.009 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:54.009 FORCE_DISTRO=true 00:00:54.009 VAGRANT_BOX_VERSION= 00:00:54.009 EXTRA_VAGRANTFILES= 00:00:54.009 NIC_MODEL=e1000 00:00:54.009 00:00:54.009 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:54.009 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:57.307 Bringing machine 'default' up with 'libvirt' provider... 00:00:57.307 ==> default: Creating image (snapshot of base box volume). 00:00:57.566 ==> default: Creating domain with the following settings... 
00:00:57.566 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727486033_0ffe018e58f4ccc90c7f 00:00:57.566 ==> default: -- Domain type: kvm 00:00:57.566 ==> default: -- Cpus: 10 00:00:57.566 ==> default: -- Feature: acpi 00:00:57.566 ==> default: -- Feature: apic 00:00:57.566 ==> default: -- Feature: pae 00:00:57.566 ==> default: -- Memory: 12288M 00:00:57.566 ==> default: -- Memory Backing: hugepages: 00:00:57.566 ==> default: -- Management MAC: 00:00:57.566 ==> default: -- Loader: 00:00:57.566 ==> default: -- Nvram: 00:00:57.566 ==> default: -- Base box: spdk/fedora39 00:00:57.566 ==> default: -- Storage pool: default 00:00:57.566 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727486033_0ffe018e58f4ccc90c7f.img (20G) 00:00:57.566 ==> default: -- Volume Cache: default 00:00:57.566 ==> default: -- Kernel: 00:00:57.566 ==> default: -- Initrd: 00:00:57.566 ==> default: -- Graphics Type: vnc 00:00:57.566 ==> default: -- Graphics Port: -1 00:00:57.566 ==> default: -- Graphics IP: 127.0.0.1 00:00:57.566 ==> default: -- Graphics Password: Not defined 00:00:57.566 ==> default: -- Video Type: cirrus 00:00:57.566 ==> default: -- Video VRAM: 9216 00:00:57.566 ==> default: -- Sound Type: 00:00:57.566 ==> default: -- Keymap: en-us 00:00:57.566 ==> default: -- TPM Path: 00:00:57.566 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:57.566 ==> default: -- Command line args: 00:00:57.566 ==> default: -> value=-device, 00:00:57.566 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:57.566 ==> default: -> value=-drive, 00:00:57.566 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:57.566 ==> default: -> value=-device, 00:00:57.566 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.566 ==> default: -> value=-device, 00:00:57.566 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:57.566 ==> default: -> value=-drive, 00:00:57.566 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:57.566 ==> default: -> value=-device, 00:00:57.566 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.566 ==> default: -> value=-drive, 00:00:57.566 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:57.566 ==> default: -> value=-device, 00:00:57.566 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.566 ==> default: -> value=-drive, 00:00:57.566 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:57.566 ==> default: -> value=-device, 00:00:57.566 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.566 ==> default: Creating shared folders metadata... 00:00:57.566 ==> default: Starting domain. 00:00:58.946 ==> default: Waiting for domain to get an IP address... 00:01:17.036 ==> default: Waiting for SSH to become available... 00:01:17.036 ==> default: Configuring and enabling network interfaces... 
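For readability, the "-> value=..." pairs in the domain definition above are the raw arguments that vagrant_create_vm.sh hands to QEMU. Reassembled into a single command line they wire the backing files created by prepare_nvme.sh into two emulated controllers: nvme-0 (serial 12340) with one namespace on ex1-nvme.img, and nvme-1 (serial 12341) with three namespaces on the ex1-nvme-multi*.img files. The following is only a sketch, assuming a standalone qemu-system-x86_64 invocation outside libvirt and that the backing images exist; the NVMe arguments themselves are copied verbatim from the domain definition, and all other machine options are omitted:

# Sketch: the NVMe topology from the libvirt domain above, expressed as plain QEMU arguments.
# Controller nvme-0 (serial 12340): one namespace backed by ex1-nvme.img.
# Controller nvme-1 (serial 12341): three namespaces backed by ex1-nvme-multi{0,1,2}.img.
qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096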
00:01:19.569 default: SSH address: 192.168.121.224:22 00:01:19.570 default: SSH username: vagrant 00:01:19.570 default: SSH auth method: private key 00:01:22.104 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:30.218 ==> default: Mounting SSHFS shared folder... 00:01:30.786 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:30.786 ==> default: Checking Mount.. 00:01:32.187 ==> default: Folder Successfully Mounted! 00:01:32.187 ==> default: Running provisioner: file... 00:01:32.755 default: ~/.gitconfig => .gitconfig 00:01:33.323 00:01:33.323 SUCCESS! 00:01:33.323 00:01:33.323 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:33.323 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:33.323 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:33.323 00:01:33.332 [Pipeline] } 00:01:33.347 [Pipeline] // stage 00:01:33.356 [Pipeline] dir 00:01:33.357 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:33.359 [Pipeline] { 00:01:33.371 [Pipeline] catchError 00:01:33.373 [Pipeline] { 00:01:33.386 [Pipeline] sh 00:01:33.665 + vagrant ssh-config --host vagrant 00:01:33.666 + sed -ne /^Host/,$p 00:01:33.666 + tee ssh_conf 00:01:36.947 Host vagrant 00:01:36.947 HostName 192.168.121.224 00:01:36.947 User vagrant 00:01:36.948 Port 22 00:01:36.948 UserKnownHostsFile /dev/null 00:01:36.948 StrictHostKeyChecking no 00:01:36.948 PasswordAuthentication no 00:01:36.948 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:36.948 IdentitiesOnly yes 00:01:36.948 LogLevel FATAL 00:01:36.948 ForwardAgent yes 00:01:36.948 ForwardX11 yes 00:01:36.948 00:01:36.961 [Pipeline] withEnv 00:01:36.963 [Pipeline] { 00:01:36.975 [Pipeline] sh 00:01:37.254 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:37.254 source /etc/os-release 00:01:37.254 [[ -e /image.version ]] && img=$(< /image.version) 00:01:37.254 # Minimal, systemd-like check. 00:01:37.254 if [[ -e /.dockerenv ]]; then 00:01:37.254 # Clear garbage from the node's name: 00:01:37.254 # agt-er_autotest_547-896 -> autotest_547-896 00:01:37.254 # $HOSTNAME is the actual container id 00:01:37.254 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:37.254 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:37.254 # We can assume this is a mount from a host where container is running, 00:01:37.254 # so fetch its hostname to easily identify the target swarm worker. 
00:01:37.254 container="$(< /etc/hostname) ($agent)" 00:01:37.254 else 00:01:37.254 # Fallback 00:01:37.254 container=$agent 00:01:37.254 fi 00:01:37.254 fi 00:01:37.254 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:37.254 00:01:37.523 [Pipeline] } 00:01:37.541 [Pipeline] // withEnv 00:01:37.551 [Pipeline] setCustomBuildProperty 00:01:37.567 [Pipeline] stage 00:01:37.569 [Pipeline] { (Tests) 00:01:37.585 [Pipeline] sh 00:01:37.863 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:38.133 [Pipeline] sh 00:01:38.412 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:38.682 [Pipeline] timeout 00:01:38.683 Timeout set to expire in 1 hr 0 min 00:01:38.685 [Pipeline] { 00:01:38.698 [Pipeline] sh 00:01:38.976 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:39.541 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:39.552 [Pipeline] sh 00:01:39.829 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:40.109 [Pipeline] sh 00:01:40.382 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:40.723 [Pipeline] sh 00:01:41.003 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:41.003 ++ readlink -f spdk_repo 00:01:41.262 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:41.262 + [[ -n /home/vagrant/spdk_repo ]] 00:01:41.262 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:41.262 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:41.262 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:41.262 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:41.262 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:41.262 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:41.262 + cd /home/vagrant/spdk_repo 00:01:41.262 + source /etc/os-release 00:01:41.262 ++ NAME='Fedora Linux' 00:01:41.262 ++ VERSION='39 (Cloud Edition)' 00:01:41.262 ++ ID=fedora 00:01:41.262 ++ VERSION_ID=39 00:01:41.262 ++ VERSION_CODENAME= 00:01:41.262 ++ PLATFORM_ID=platform:f39 00:01:41.262 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:41.262 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:41.262 ++ LOGO=fedora-logo-icon 00:01:41.262 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:41.262 ++ HOME_URL=https://fedoraproject.org/ 00:01:41.262 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:41.262 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:41.262 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:41.262 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:41.262 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:41.262 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:41.262 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:41.262 ++ SUPPORT_END=2024-11-12 00:01:41.262 ++ VARIANT='Cloud Edition' 00:01:41.262 ++ VARIANT_ID=cloud 00:01:41.262 + uname -a 00:01:41.262 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:41.262 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:41.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:41.521 Hugepages 00:01:41.521 node hugesize free / total 00:01:41.521 node0 1048576kB 0 / 0 00:01:41.521 node0 2048kB 0 / 0 00:01:41.521 00:01:41.521 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:41.521 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:41.781 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:41.781 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:41.781 + rm -f /tmp/spdk-ld-path 00:01:41.781 + source autorun-spdk.conf 00:01:41.781 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.781 ++ SPDK_TEST_NVMF=1 00:01:41.781 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.781 ++ SPDK_TEST_URING=1 00:01:41.781 ++ SPDK_TEST_VFIOUSER=1 00:01:41.781 ++ SPDK_TEST_USDT=1 00:01:41.781 ++ SPDK_RUN_ASAN=1 00:01:41.781 ++ SPDK_RUN_UBSAN=1 00:01:41.781 ++ NET_TYPE=virt 00:01:41.781 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.781 ++ RUN_NIGHTLY=1 00:01:41.781 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:41.781 + [[ -n '' ]] 00:01:41.781 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:41.781 + for M in /var/spdk/build-*-manifest.txt 00:01:41.781 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:41.781 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.781 + for M in /var/spdk/build-*-manifest.txt 00:01:41.781 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:41.781 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.781 + for M in /var/spdk/build-*-manifest.txt 00:01:41.781 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:41.781 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.781 ++ uname 00:01:41.781 + [[ Linux == \L\i\n\u\x ]] 00:01:41.781 + sudo dmesg -T 00:01:41.781 + sudo dmesg --clear 00:01:41.781 + dmesg_pid=5257 00:01:41.781 + sudo dmesg -Tw 00:01:41.781 + [[ Fedora Linux == FreeBSD ]] 00:01:41.781 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.781 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.781 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:41.781 + [[ -x /usr/src/fio-static/fio ]] 00:01:41.781 + export FIO_BIN=/usr/src/fio-static/fio 00:01:41.781 + FIO_BIN=/usr/src/fio-static/fio 00:01:41.781 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:41.781 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:41.781 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:41.781 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.781 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.781 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:41.781 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.781 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.781 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:41.781 Test configuration: 00:01:41.781 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.781 SPDK_TEST_NVMF=1 00:01:41.781 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.781 SPDK_TEST_URING=1 00:01:41.781 SPDK_TEST_VFIOUSER=1 00:01:41.781 SPDK_TEST_USDT=1 00:01:41.781 SPDK_RUN_ASAN=1 00:01:41.781 SPDK_RUN_UBSAN=1 00:01:41.781 NET_TYPE=virt 00:01:41.781 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.781 RUN_NIGHTLY=1 01:14:37 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:41.781 01:14:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:41.781 01:14:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:41.781 01:14:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:41.781 01:14:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:41.781 01:14:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:41.781 01:14:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.781 01:14:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.781 01:14:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.781 01:14:37 -- paths/export.sh@5 -- $ export PATH 00:01:41.781 01:14:37 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.781 01:14:37 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:42.040 01:14:37 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:42.040 01:14:37 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727486077.XXXXXX 00:01:42.040 01:14:37 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727486077.J5eFOR 00:01:42.040 01:14:37 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:42.040 01:14:37 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:01:42.040 01:14:37 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:42.040 01:14:37 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:42.040 01:14:37 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:42.040 01:14:37 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:42.040 01:14:37 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:42.040 01:14:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.040 01:14:37 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:01:42.040 01:14:37 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:42.040 01:14:37 -- pm/common@17 -- $ local monitor 00:01:42.040 01:14:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.040 01:14:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.040 01:14:37 -- pm/common@25 -- $ sleep 1 00:01:42.040 01:14:37 -- pm/common@21 -- $ date +%s 00:01:42.040 01:14:37 -- pm/common@21 -- $ date +%s 00:01:42.040 01:14:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727486077 00:01:42.040 01:14:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727486077 00:01:42.040 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727486077_collect-cpu-load.pm.log 00:01:42.040 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727486077_collect-vmstat.pm.log 00:01:42.977 01:14:38 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:01:42.977 01:14:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:42.977 01:14:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:42.977 01:14:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:42.977 01:14:38 -- spdk/autobuild.sh@16 -- $ date -u 00:01:42.977 Sat Sep 28 01:14:38 AM UTC 2024 00:01:42.977 01:14:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:42.977 
v25.01-pre-17-g09cc66129 00:01:42.977 01:14:38 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:42.977 01:14:38 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:42.977 01:14:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:42.977 01:14:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:42.977 01:14:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.977 ************************************ 00:01:42.977 START TEST asan 00:01:42.977 ************************************ 00:01:42.977 using asan 00:01:42.977 01:14:38 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:42.977 00:01:42.977 real 0m0.000s 00:01:42.977 user 0m0.000s 00:01:42.977 sys 0m0.000s 00:01:42.977 ************************************ 00:01:42.977 END TEST asan 00:01:42.977 ************************************ 00:01:42.977 01:14:38 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:42.977 01:14:38 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:42.977 01:14:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:42.977 01:14:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:42.977 01:14:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:42.977 01:14:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:42.978 01:14:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.978 ************************************ 00:01:42.978 START TEST ubsan 00:01:42.978 ************************************ 00:01:42.978 using ubsan 00:01:42.978 01:14:38 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:42.978 00:01:42.978 real 0m0.000s 00:01:42.978 user 0m0.000s 00:01:42.978 sys 0m0.000s 00:01:42.978 01:14:38 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:42.978 ************************************ 00:01:42.978 01:14:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:42.978 END TEST ubsan 00:01:42.978 ************************************ 00:01:42.978 01:14:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:42.978 01:14:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:42.978 01:14:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:42.978 01:14:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:42.978 01:14:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:42.978 01:14:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:42.978 01:14:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:42.978 01:14:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:42.978 01:14:38 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:01:43.237 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:43.237 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:43.805 Using 'verbs' RDMA provider 00:01:56.951 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:11.826 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:11.826 Creating mk/config.mk...done. 00:02:11.826 Creating mk/cc.flags.mk...done. 00:02:11.826 Type 'make' to build. 
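At this point autobuild has finished configuring SPDK and the build phase that follows is a plain configure-and-make flow. Condensed from the xtrace above into the commands executed on the VM (a sketch only, reusing the same repo path and the flag set captured in the config_params line), it amounts to:

cd /home/vagrant/spdk_repo/spdk
# Flags come from get_config_params, with ASAN/UBSAN and uring/vfio-user enabled per autorun-spdk.conf.
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user \
    --with-uring --with-shared
# The next stage in the log ("run_test make make -j10") then builds the tree:
make -j10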
00:02:11.826 01:15:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:11.826 01:15:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.826 01:15:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.826 01:15:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.826 ************************************ 00:02:11.826 START TEST make 00:02:11.826 ************************************ 00:02:11.826 01:15:06 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:11.826 make[1]: Nothing to be done for 'all'. 00:02:12.084 The Meson build system 00:02:12.084 Version: 1.5.0 00:02:12.085 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:12.085 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:12.085 Build type: native build 00:02:12.085 Project name: libvfio-user 00:02:12.085 Project version: 0.0.1 00:02:12.085 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:12.085 C linker for the host machine: cc ld.bfd 2.40-14 00:02:12.085 Host machine cpu family: x86_64 00:02:12.085 Host machine cpu: x86_64 00:02:12.085 Run-time dependency threads found: YES 00:02:12.085 Library dl found: YES 00:02:12.085 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:12.085 Run-time dependency json-c found: YES 0.17 00:02:12.085 Run-time dependency cmocka found: YES 1.1.7 00:02:12.085 Program pytest-3 found: NO 00:02:12.085 Program flake8 found: NO 00:02:12.085 Program misspell-fixer found: NO 00:02:12.085 Program restructuredtext-lint found: NO 00:02:12.085 Program valgrind found: YES (/usr/bin/valgrind) 00:02:12.085 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.085 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.085 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.085 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:12.085 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:12.085 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:12.085 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:12.085 Build targets in project: 8 00:02:12.085 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:12.085 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:12.085 00:02:12.085 libvfio-user 0.0.1 00:02:12.085 00:02:12.085 User defined options 00:02:12.085 buildtype : debug 00:02:12.085 default_library: shared 00:02:12.085 libdir : /usr/local/lib 00:02:12.085 00:02:12.085 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.651 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:12.651 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:12.910 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:12.910 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:12.910 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:12.910 [5/37] Compiling C object samples/client.p/client.c.o 00:02:12.910 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:12.910 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:12.910 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:12.910 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:12.910 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:12.910 [11/37] Compiling C object samples/null.p/null.c.o 00:02:12.910 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:12.910 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:12.910 [14/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:12.910 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:12.910 [16/37] Linking target samples/client 00:02:12.910 [17/37] Compiling C object samples/server.p/server.c.o 00:02:12.910 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:12.910 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:12.910 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:13.185 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:13.185 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:13.185 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:13.185 [24/37] Linking target lib/libvfio-user.so.0.0.1 00:02:13.185 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:13.185 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:13.185 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:13.185 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:13.185 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:13.185 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:13.185 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:13.185 [32/37] Linking target samples/server 00:02:13.185 [33/37] Linking target samples/shadow_ioeventfd_server 00:02:13.185 [34/37] Linking target samples/lspci 00:02:13.185 [35/37] Linking target samples/null 00:02:13.185 [36/37] Linking target test/unit_tests 00:02:13.185 [37/37] Linking target samples/gpio-pci-idio-16 00:02:13.185 INFO: autodetecting backend as ninja 00:02:13.185 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:13.458 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:13.717 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:13.717 ninja: no work to do. 00:02:23.687 The Meson build system 00:02:23.687 Version: 1.5.0 00:02:23.687 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:23.687 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:23.687 Build type: native build 00:02:23.687 Program cat found: YES (/usr/bin/cat) 00:02:23.687 Project name: DPDK 00:02:23.687 Project version: 24.03.0 00:02:23.687 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:23.687 C linker for the host machine: cc ld.bfd 2.40-14 00:02:23.687 Host machine cpu family: x86_64 00:02:23.687 Host machine cpu: x86_64 00:02:23.687 Message: ## Building in Developer Mode ## 00:02:23.687 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:23.687 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:23.687 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:23.687 Program python3 found: YES (/usr/bin/python3) 00:02:23.687 Program cat found: YES (/usr/bin/cat) 00:02:23.687 Compiler for C supports arguments -march=native: YES 00:02:23.687 Checking for size of "void *" : 8 00:02:23.687 Checking for size of "void *" : 8 (cached) 00:02:23.687 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:23.687 Library m found: YES 00:02:23.687 Library numa found: YES 00:02:23.687 Has header "numaif.h" : YES 00:02:23.687 Library fdt found: NO 00:02:23.687 Library execinfo found: NO 00:02:23.687 Has header "execinfo.h" : YES 00:02:23.687 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.687 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.687 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.687 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.687 Run-time dependency openssl found: YES 3.1.1 00:02:23.687 Run-time dependency libpcap found: YES 1.10.4 00:02:23.687 Has header "pcap.h" with dependency libpcap: YES 00:02:23.687 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.687 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.687 Compiler for C supports arguments -Wformat: YES 00:02:23.687 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.687 Compiler for C supports arguments -Wformat-security: NO 00:02:23.687 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.687 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.687 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.687 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.687 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.687 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.687 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.687 Compiler for C supports arguments -Wundef: YES 00:02:23.687 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.687 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.687 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.687 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.687 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:02:23.687 Program objdump found: YES (/usr/bin/objdump) 00:02:23.687 Compiler for C supports arguments -mavx512f: YES 00:02:23.687 Checking if "AVX512 checking" compiles: YES 00:02:23.687 Fetching value of define "__SSE4_2__" : 1 00:02:23.687 Fetching value of define "__AES__" : 1 00:02:23.687 Fetching value of define "__AVX__" : 1 00:02:23.687 Fetching value of define "__AVX2__" : 1 00:02:23.687 Fetching value of define "__AVX512BW__" : (undefined) 00:02:23.687 Fetching value of define "__AVX512CD__" : (undefined) 00:02:23.687 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:23.687 Fetching value of define "__AVX512F__" : (undefined) 00:02:23.687 Fetching value of define "__AVX512VL__" : (undefined) 00:02:23.687 Fetching value of define "__PCLMUL__" : 1 00:02:23.687 Fetching value of define "__RDRND__" : 1 00:02:23.687 Fetching value of define "__RDSEED__" : 1 00:02:23.687 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:23.687 Fetching value of define "__znver1__" : (undefined) 00:02:23.687 Fetching value of define "__znver2__" : (undefined) 00:02:23.687 Fetching value of define "__znver3__" : (undefined) 00:02:23.687 Fetching value of define "__znver4__" : (undefined) 00:02:23.687 Library asan found: YES 00:02:23.687 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.687 Message: lib/log: Defining dependency "log" 00:02:23.687 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.687 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.687 Library rt found: YES 00:02:23.687 Checking for function "getentropy" : NO 00:02:23.687 Message: lib/eal: Defining dependency "eal" 00:02:23.687 Message: lib/ring: Defining dependency "ring" 00:02:23.687 Message: lib/rcu: Defining dependency "rcu" 00:02:23.687 Message: lib/mempool: Defining dependency "mempool" 00:02:23.687 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.687 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.687 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:23.687 Compiler for C supports arguments -mpclmul: YES 00:02:23.687 Compiler for C supports arguments -maes: YES 00:02:23.687 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.687 Compiler for C supports arguments -mavx512bw: YES 00:02:23.687 Compiler for C supports arguments -mavx512dq: YES 00:02:23.687 Compiler for C supports arguments -mavx512vl: YES 00:02:23.687 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.687 Compiler for C supports arguments -mavx2: YES 00:02:23.687 Compiler for C supports arguments -mavx: YES 00:02:23.687 Message: lib/net: Defining dependency "net" 00:02:23.687 Message: lib/meter: Defining dependency "meter" 00:02:23.687 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.687 Message: lib/pci: Defining dependency "pci" 00:02:23.687 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.687 Message: lib/hash: Defining dependency "hash" 00:02:23.687 Message: lib/timer: Defining dependency "timer" 00:02:23.687 Message: lib/compressdev: Defining dependency "compressdev" 00:02:23.687 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.687 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.687 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:23.687 Message: lib/power: Defining dependency "power" 00:02:23.687 Message: lib/reorder: Defining dependency "reorder" 00:02:23.687 Message: lib/security: Defining dependency "security" 00:02:23.687 Has header 
"linux/userfaultfd.h" : YES 00:02:23.687 Has header "linux/vduse.h" : YES 00:02:23.687 Message: lib/vhost: Defining dependency "vhost" 00:02:23.687 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.687 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.687 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.687 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.687 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:23.687 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:23.687 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:23.687 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:23.687 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:23.687 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:23.687 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.687 Configuring doxy-api-html.conf using configuration 00:02:23.687 Configuring doxy-api-man.conf using configuration 00:02:23.687 Program mandb found: YES (/usr/bin/mandb) 00:02:23.687 Program sphinx-build found: NO 00:02:23.687 Configuring rte_build_config.h using configuration 00:02:23.687 Message: 00:02:23.687 ================= 00:02:23.687 Applications Enabled 00:02:23.687 ================= 00:02:23.687 00:02:23.687 apps: 00:02:23.687 00:02:23.687 00:02:23.687 Message: 00:02:23.688 ================= 00:02:23.688 Libraries Enabled 00:02:23.688 ================= 00:02:23.688 00:02:23.688 libs: 00:02:23.688 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:23.688 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:23.688 cryptodev, dmadev, power, reorder, security, vhost, 00:02:23.688 00:02:23.688 Message: 00:02:23.688 =============== 00:02:23.688 Drivers Enabled 00:02:23.688 =============== 00:02:23.688 00:02:23.688 common: 00:02:23.688 00:02:23.688 bus: 00:02:23.688 pci, vdev, 00:02:23.688 mempool: 00:02:23.688 ring, 00:02:23.688 dma: 00:02:23.688 00:02:23.688 net: 00:02:23.688 00:02:23.688 crypto: 00:02:23.688 00:02:23.688 compress: 00:02:23.688 00:02:23.688 vdpa: 00:02:23.688 00:02:23.688 00:02:23.688 Message: 00:02:23.688 ================= 00:02:23.688 Content Skipped 00:02:23.688 ================= 00:02:23.688 00:02:23.688 apps: 00:02:23.688 dumpcap: explicitly disabled via build config 00:02:23.688 graph: explicitly disabled via build config 00:02:23.688 pdump: explicitly disabled via build config 00:02:23.688 proc-info: explicitly disabled via build config 00:02:23.688 test-acl: explicitly disabled via build config 00:02:23.688 test-bbdev: explicitly disabled via build config 00:02:23.688 test-cmdline: explicitly disabled via build config 00:02:23.688 test-compress-perf: explicitly disabled via build config 00:02:23.688 test-crypto-perf: explicitly disabled via build config 00:02:23.688 test-dma-perf: explicitly disabled via build config 00:02:23.688 test-eventdev: explicitly disabled via build config 00:02:23.688 test-fib: explicitly disabled via build config 00:02:23.688 test-flow-perf: explicitly disabled via build config 00:02:23.688 test-gpudev: explicitly disabled via build config 00:02:23.688 test-mldev: explicitly disabled via build config 00:02:23.688 test-pipeline: explicitly disabled via build config 00:02:23.688 test-pmd: explicitly disabled via build config 00:02:23.688 test-regex: explicitly disabled via build config 00:02:23.688 
test-sad: explicitly disabled via build config 00:02:23.688 test-security-perf: explicitly disabled via build config 00:02:23.688 00:02:23.688 libs: 00:02:23.688 argparse: explicitly disabled via build config 00:02:23.688 metrics: explicitly disabled via build config 00:02:23.688 acl: explicitly disabled via build config 00:02:23.688 bbdev: explicitly disabled via build config 00:02:23.688 bitratestats: explicitly disabled via build config 00:02:23.688 bpf: explicitly disabled via build config 00:02:23.688 cfgfile: explicitly disabled via build config 00:02:23.688 distributor: explicitly disabled via build config 00:02:23.688 efd: explicitly disabled via build config 00:02:23.688 eventdev: explicitly disabled via build config 00:02:23.688 dispatcher: explicitly disabled via build config 00:02:23.688 gpudev: explicitly disabled via build config 00:02:23.688 gro: explicitly disabled via build config 00:02:23.688 gso: explicitly disabled via build config 00:02:23.688 ip_frag: explicitly disabled via build config 00:02:23.688 jobstats: explicitly disabled via build config 00:02:23.688 latencystats: explicitly disabled via build config 00:02:23.688 lpm: explicitly disabled via build config 00:02:23.688 member: explicitly disabled via build config 00:02:23.688 pcapng: explicitly disabled via build config 00:02:23.688 rawdev: explicitly disabled via build config 00:02:23.688 regexdev: explicitly disabled via build config 00:02:23.688 mldev: explicitly disabled via build config 00:02:23.688 rib: explicitly disabled via build config 00:02:23.688 sched: explicitly disabled via build config 00:02:23.688 stack: explicitly disabled via build config 00:02:23.688 ipsec: explicitly disabled via build config 00:02:23.688 pdcp: explicitly disabled via build config 00:02:23.688 fib: explicitly disabled via build config 00:02:23.688 port: explicitly disabled via build config 00:02:23.688 pdump: explicitly disabled via build config 00:02:23.688 table: explicitly disabled via build config 00:02:23.688 pipeline: explicitly disabled via build config 00:02:23.688 graph: explicitly disabled via build config 00:02:23.688 node: explicitly disabled via build config 00:02:23.688 00:02:23.688 drivers: 00:02:23.688 common/cpt: not in enabled drivers build config 00:02:23.688 common/dpaax: not in enabled drivers build config 00:02:23.688 common/iavf: not in enabled drivers build config 00:02:23.688 common/idpf: not in enabled drivers build config 00:02:23.688 common/ionic: not in enabled drivers build config 00:02:23.688 common/mvep: not in enabled drivers build config 00:02:23.688 common/octeontx: not in enabled drivers build config 00:02:23.688 bus/auxiliary: not in enabled drivers build config 00:02:23.688 bus/cdx: not in enabled drivers build config 00:02:23.688 bus/dpaa: not in enabled drivers build config 00:02:23.688 bus/fslmc: not in enabled drivers build config 00:02:23.688 bus/ifpga: not in enabled drivers build config 00:02:23.688 bus/platform: not in enabled drivers build config 00:02:23.688 bus/uacce: not in enabled drivers build config 00:02:23.688 bus/vmbus: not in enabled drivers build config 00:02:23.688 common/cnxk: not in enabled drivers build config 00:02:23.688 common/mlx5: not in enabled drivers build config 00:02:23.688 common/nfp: not in enabled drivers build config 00:02:23.688 common/nitrox: not in enabled drivers build config 00:02:23.688 common/qat: not in enabled drivers build config 00:02:23.688 common/sfc_efx: not in enabled drivers build config 00:02:23.688 mempool/bucket: not in enabled 
drivers build config 00:02:23.688 mempool/cnxk: not in enabled drivers build config 00:02:23.688 mempool/dpaa: not in enabled drivers build config 00:02:23.688 mempool/dpaa2: not in enabled drivers build config 00:02:23.688 mempool/octeontx: not in enabled drivers build config 00:02:23.688 mempool/stack: not in enabled drivers build config 00:02:23.688 dma/cnxk: not in enabled drivers build config 00:02:23.688 dma/dpaa: not in enabled drivers build config 00:02:23.688 dma/dpaa2: not in enabled drivers build config 00:02:23.688 dma/hisilicon: not in enabled drivers build config 00:02:23.688 dma/idxd: not in enabled drivers build config 00:02:23.688 dma/ioat: not in enabled drivers build config 00:02:23.688 dma/skeleton: not in enabled drivers build config 00:02:23.688 net/af_packet: not in enabled drivers build config 00:02:23.688 net/af_xdp: not in enabled drivers build config 00:02:23.688 net/ark: not in enabled drivers build config 00:02:23.688 net/atlantic: not in enabled drivers build config 00:02:23.688 net/avp: not in enabled drivers build config 00:02:23.688 net/axgbe: not in enabled drivers build config 00:02:23.688 net/bnx2x: not in enabled drivers build config 00:02:23.688 net/bnxt: not in enabled drivers build config 00:02:23.688 net/bonding: not in enabled drivers build config 00:02:23.688 net/cnxk: not in enabled drivers build config 00:02:23.688 net/cpfl: not in enabled drivers build config 00:02:23.688 net/cxgbe: not in enabled drivers build config 00:02:23.688 net/dpaa: not in enabled drivers build config 00:02:23.688 net/dpaa2: not in enabled drivers build config 00:02:23.688 net/e1000: not in enabled drivers build config 00:02:23.688 net/ena: not in enabled drivers build config 00:02:23.688 net/enetc: not in enabled drivers build config 00:02:23.688 net/enetfec: not in enabled drivers build config 00:02:23.688 net/enic: not in enabled drivers build config 00:02:23.688 net/failsafe: not in enabled drivers build config 00:02:23.688 net/fm10k: not in enabled drivers build config 00:02:23.688 net/gve: not in enabled drivers build config 00:02:23.688 net/hinic: not in enabled drivers build config 00:02:23.688 net/hns3: not in enabled drivers build config 00:02:23.688 net/i40e: not in enabled drivers build config 00:02:23.688 net/iavf: not in enabled drivers build config 00:02:23.688 net/ice: not in enabled drivers build config 00:02:23.688 net/idpf: not in enabled drivers build config 00:02:23.688 net/igc: not in enabled drivers build config 00:02:23.688 net/ionic: not in enabled drivers build config 00:02:23.688 net/ipn3ke: not in enabled drivers build config 00:02:23.688 net/ixgbe: not in enabled drivers build config 00:02:23.688 net/mana: not in enabled drivers build config 00:02:23.688 net/memif: not in enabled drivers build config 00:02:23.688 net/mlx4: not in enabled drivers build config 00:02:23.688 net/mlx5: not in enabled drivers build config 00:02:23.688 net/mvneta: not in enabled drivers build config 00:02:23.688 net/mvpp2: not in enabled drivers build config 00:02:23.688 net/netvsc: not in enabled drivers build config 00:02:23.688 net/nfb: not in enabled drivers build config 00:02:23.688 net/nfp: not in enabled drivers build config 00:02:23.688 net/ngbe: not in enabled drivers build config 00:02:23.688 net/null: not in enabled drivers build config 00:02:23.688 net/octeontx: not in enabled drivers build config 00:02:23.688 net/octeon_ep: not in enabled drivers build config 00:02:23.688 net/pcap: not in enabled drivers build config 00:02:23.688 net/pfe: not in 
enabled drivers build config 00:02:23.688 net/qede: not in enabled drivers build config 00:02:23.688 net/ring: not in enabled drivers build config 00:02:23.688 net/sfc: not in enabled drivers build config 00:02:23.688 net/softnic: not in enabled drivers build config 00:02:23.688 net/tap: not in enabled drivers build config 00:02:23.688 net/thunderx: not in enabled drivers build config 00:02:23.688 net/txgbe: not in enabled drivers build config 00:02:23.688 net/vdev_netvsc: not in enabled drivers build config 00:02:23.688 net/vhost: not in enabled drivers build config 00:02:23.688 net/virtio: not in enabled drivers build config 00:02:23.688 net/vmxnet3: not in enabled drivers build config 00:02:23.688 raw/*: missing internal dependency, "rawdev" 00:02:23.688 crypto/armv8: not in enabled drivers build config 00:02:23.688 crypto/bcmfs: not in enabled drivers build config 00:02:23.688 crypto/caam_jr: not in enabled drivers build config 00:02:23.688 crypto/ccp: not in enabled drivers build config 00:02:23.688 crypto/cnxk: not in enabled drivers build config 00:02:23.688 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.688 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.688 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.688 crypto/mlx5: not in enabled drivers build config 00:02:23.688 crypto/mvsam: not in enabled drivers build config 00:02:23.688 crypto/nitrox: not in enabled drivers build config 00:02:23.688 crypto/null: not in enabled drivers build config 00:02:23.688 crypto/octeontx: not in enabled drivers build config 00:02:23.688 crypto/openssl: not in enabled drivers build config 00:02:23.689 crypto/scheduler: not in enabled drivers build config 00:02:23.689 crypto/uadk: not in enabled drivers build config 00:02:23.689 crypto/virtio: not in enabled drivers build config 00:02:23.689 compress/isal: not in enabled drivers build config 00:02:23.689 compress/mlx5: not in enabled drivers build config 00:02:23.689 compress/nitrox: not in enabled drivers build config 00:02:23.689 compress/octeontx: not in enabled drivers build config 00:02:23.689 compress/zlib: not in enabled drivers build config 00:02:23.689 regex/*: missing internal dependency, "regexdev" 00:02:23.689 ml/*: missing internal dependency, "mldev" 00:02:23.689 vdpa/ifc: not in enabled drivers build config 00:02:23.689 vdpa/mlx5: not in enabled drivers build config 00:02:23.689 vdpa/nfp: not in enabled drivers build config 00:02:23.689 vdpa/sfc: not in enabled drivers build config 00:02:23.689 event/*: missing internal dependency, "eventdev" 00:02:23.689 baseband/*: missing internal dependency, "bbdev" 00:02:23.689 gpu/*: missing internal dependency, "gpudev" 00:02:23.689 00:02:23.689 00:02:24.254 Build targets in project: 85 00:02:24.254 00:02:24.254 DPDK 24.03.0 00:02:24.254 00:02:24.254 User defined options 00:02:24.254 buildtype : debug 00:02:24.254 default_library : shared 00:02:24.254 libdir : lib 00:02:24.254 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:24.255 b_sanitize : address 00:02:24.255 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:24.255 c_link_args : 00:02:24.255 cpu_instruction_set: native 00:02:24.255 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:24.255 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:24.255 enable_docs : false 00:02:24.255 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:24.255 enable_kmods : false 00:02:24.255 max_lcores : 128 00:02:24.255 tests : false 00:02:24.255 00:02:24.255 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.821 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:25.079 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:25.079 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:25.079 [3/268] Linking static target lib/librte_kvargs.a 00:02:25.079 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:25.079 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:25.079 [6/268] Linking static target lib/librte_log.a 00:02:25.646 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.646 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:25.646 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:25.646 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:25.646 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:25.905 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:25.905 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:25.905 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:25.905 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.905 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:26.163 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:26.163 [18/268] Linking target lib/librte_log.so.24.1 00:02:26.163 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:26.163 [20/268] Linking static target lib/librte_telemetry.a 00:02:26.421 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:26.421 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:26.421 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.680 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.680 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.680 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:26.680 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:26.680 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.939 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.939 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.939 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.939 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.939 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 
00:02:27.198 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:27.198 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.456 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:27.456 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:27.720 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.720 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.720 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:27.720 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:27.720 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:27.720 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:27.720 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:27.993 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.993 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.993 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.251 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:28.251 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:28.252 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.510 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:28.510 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.769 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.769 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:28.769 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:29.028 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:29.028 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:29.028 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.028 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.028 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.288 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.288 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.288 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.288 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.547 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.547 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.547 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.805 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.806 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.806 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:29.806 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.806 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:30.064 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:30.064 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:30.064 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:30.321 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:30.322 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.322 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:30.322 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.322 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:30.322 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.580 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.580 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.839 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.839 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:31.097 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:31.098 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:31.098 [88/268] Linking static target lib/librte_ring.a 00:02:31.098 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:31.098 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:31.098 [91/268] Linking static target lib/librte_rcu.a 00:02:31.098 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:31.356 [93/268] Linking static target lib/librte_eal.a 00:02:31.356 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:31.356 [95/268] Linking static target lib/librte_mempool.a 00:02:31.356 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:31.356 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:31.356 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:31.615 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:31.615 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.615 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.873 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:32.132 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:32.132 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:32.132 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:32.132 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:32.391 [107/268] Linking static target lib/librte_net.a 00:02:32.391 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:32.391 [109/268] Linking static target lib/librte_meter.a 00:02:32.649 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:32.649 [111/268] Linking static target lib/librte_mbuf.a 00:02:32.649 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.649 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:32.907 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.907 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:32.907 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:32.907 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.165 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:33.424 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:33.682 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:33.682 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.682 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:33.682 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:33.941 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:34.199 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.199 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:34.199 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:34.199 [128/268] Linking static target lib/librte_pci.a 00:02:34.199 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.199 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:34.457 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.457 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.457 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.457 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:34.457 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.716 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.716 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.716 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.716 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.716 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.716 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.716 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.974 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:34.974 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:34.974 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:34.974 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:34.974 [147/268] Linking static target lib/librte_cmdline.a 00:02:35.541 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.541 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.541 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.541 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:35.799 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.799 [153/268] Linking static target lib/librte_timer.a 00:02:36.058 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:36.317 [155/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:36.317 [156/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.317 [157/268] Linking static target lib/librte_hash.a 00:02:36.575 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:36.575 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:36.575 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:36.575 [161/268] Linking static target lib/librte_ethdev.a 00:02:36.575 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.575 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:36.575 [164/268] Linking static target lib/librte_compressdev.a 00:02:36.833 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:36.833 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.091 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:37.091 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:37.091 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:37.349 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:37.349 [171/268] Linking static target lib/librte_dmadev.a 00:02:37.349 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:37.349 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:37.607 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.864 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.864 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:37.864 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:38.152 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.152 [179/268] Linking static target lib/librte_cryptodev.a 00:02:38.152 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:38.152 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.410 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:38.410 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:38.410 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:38.668 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:38.668 [186/268] Linking static target lib/librte_power.a 00:02:38.926 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:38.926 [188/268] Linking static target lib/librte_reorder.a 00:02:38.926 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:38.926 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.184 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.184 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.184 [193/268] Linking static target lib/librte_security.a 00:02:39.443 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.009 [195/268] Compiling 
C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.009 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.009 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.267 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:40.267 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:40.267 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.524 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.782 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:41.040 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:41.040 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:41.040 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:41.040 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:41.299 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:41.556 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:41.556 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:41.813 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:41.813 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:41.813 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.813 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.813 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.813 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:42.071 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:42.071 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:42.071 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:42.071 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.071 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.071 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:42.329 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.329 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.329 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.329 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.329 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:42.587 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.153 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:43.411 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.669 [230/268] Linking target lib/librte_eal.so.24.1 00:02:43.669 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:43.669 [232/268] Linking target 
lib/librte_meter.so.24.1 00:02:43.669 [233/268] Linking target lib/librte_ring.so.24.1 00:02:43.669 [234/268] Linking target lib/librte_timer.so.24.1 00:02:43.669 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:43.669 [236/268] Linking target lib/librte_pci.so.24.1 00:02:43.669 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:43.927 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:43.927 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:43.927 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:43.927 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:43.927 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:43.927 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:43.927 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:43.927 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:44.185 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:44.185 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:44.185 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:44.185 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:44.185 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:44.443 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:44.443 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:02:44.443 [253/268] Linking target lib/librte_net.so.24.1 00:02:44.443 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:44.443 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:44.443 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:44.443 [257/268] Linking target lib/librte_security.so.24.1 00:02:44.443 [258/268] Linking target lib/librte_hash.so.24.1 00:02:44.443 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:44.701 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:44.701 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.701 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:44.959 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:44.959 [264/268] Linking target lib/librte_power.so.24.1 00:02:47.491 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.750 [266/268] Linking static target lib/librte_vhost.a 00:02:49.130 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.130 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:49.130 INFO: autodetecting backend as ninja 00:02:49.130 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.064 CC lib/log/log.o 00:03:11.064 CC lib/log/log_flags.o 00:03:11.064 CC lib/log/log_deprecated.o 00:03:11.064 CC lib/ut_mock/mock.o 00:03:11.064 CC lib/ut/ut.o 00:03:11.064 LIB libspdk_log.a 00:03:11.064 LIB libspdk_ut.a 00:03:11.064 LIB libspdk_ut_mock.a 00:03:11.064 SO libspdk_log.so.7.0 00:03:11.064 SO libspdk_ut.so.2.0 00:03:11.064 SO libspdk_ut_mock.so.6.0 00:03:11.064 SYMLINK libspdk_ut_mock.so 00:03:11.064 SYMLINK libspdk_ut.so 00:03:11.064 
SYMLINK libspdk_log.so 00:03:11.064 CXX lib/trace_parser/trace.o 00:03:11.064 CC lib/dma/dma.o 00:03:11.064 CC lib/ioat/ioat.o 00:03:11.064 CC lib/util/base64.o 00:03:11.064 CC lib/util/bit_array.o 00:03:11.064 CC lib/util/cpuset.o 00:03:11.064 CC lib/util/crc32.o 00:03:11.064 CC lib/util/crc32c.o 00:03:11.064 CC lib/util/crc16.o 00:03:11.064 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.064 CC lib/vfio_user/host/vfio_user.o 00:03:11.064 CC lib/util/crc32_ieee.o 00:03:11.064 CC lib/util/crc64.o 00:03:11.064 LIB libspdk_dma.a 00:03:11.064 SO libspdk_dma.so.5.0 00:03:11.064 CC lib/util/dif.o 00:03:11.064 CC lib/util/fd.o 00:03:11.064 SYMLINK libspdk_dma.so 00:03:11.064 CC lib/util/fd_group.o 00:03:11.064 CC lib/util/file.o 00:03:11.064 CC lib/util/hexlify.o 00:03:11.064 CC lib/util/iov.o 00:03:11.064 LIB libspdk_ioat.a 00:03:11.064 CC lib/util/math.o 00:03:11.064 CC lib/util/net.o 00:03:11.064 SO libspdk_ioat.so.7.0 00:03:11.064 LIB libspdk_vfio_user.a 00:03:11.064 CC lib/util/pipe.o 00:03:11.064 SO libspdk_vfio_user.so.5.0 00:03:11.064 SYMLINK libspdk_ioat.so 00:03:11.064 CC lib/util/strerror_tls.o 00:03:11.064 CC lib/util/string.o 00:03:11.064 CC lib/util/uuid.o 00:03:11.064 SYMLINK libspdk_vfio_user.so 00:03:11.064 CC lib/util/xor.o 00:03:11.064 CC lib/util/zipf.o 00:03:11.064 CC lib/util/md5.o 00:03:11.064 LIB libspdk_util.a 00:03:11.064 SO libspdk_util.so.10.0 00:03:11.323 LIB libspdk_trace_parser.a 00:03:11.323 SYMLINK libspdk_util.so 00:03:11.323 SO libspdk_trace_parser.so.6.0 00:03:11.323 SYMLINK libspdk_trace_parser.so 00:03:11.581 CC lib/vmd/vmd.o 00:03:11.581 CC lib/conf/conf.o 00:03:11.581 CC lib/idxd/idxd.o 00:03:11.581 CC lib/vmd/led.o 00:03:11.581 CC lib/json/json_parse.o 00:03:11.581 CC lib/json/json_util.o 00:03:11.581 CC lib/rdma_utils/rdma_utils.o 00:03:11.581 CC lib/idxd/idxd_user.o 00:03:11.581 CC lib/env_dpdk/env.o 00:03:11.581 CC lib/rdma_provider/common.o 00:03:11.839 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:11.839 CC lib/env_dpdk/memory.o 00:03:11.839 LIB libspdk_conf.a 00:03:11.839 CC lib/json/json_write.o 00:03:11.839 CC lib/idxd/idxd_kernel.o 00:03:11.839 SO libspdk_conf.so.6.0 00:03:11.839 CC lib/env_dpdk/pci.o 00:03:11.839 SYMLINK libspdk_conf.so 00:03:11.839 CC lib/env_dpdk/init.o 00:03:11.839 LIB libspdk_rdma_utils.a 00:03:11.839 SO libspdk_rdma_utils.so.1.0 00:03:11.839 LIB libspdk_rdma_provider.a 00:03:11.839 SO libspdk_rdma_provider.so.6.0 00:03:12.097 SYMLINK libspdk_rdma_utils.so 00:03:12.097 CC lib/env_dpdk/threads.o 00:03:12.097 CC lib/env_dpdk/pci_ioat.o 00:03:12.097 SYMLINK libspdk_rdma_provider.so 00:03:12.097 CC lib/env_dpdk/pci_virtio.o 00:03:12.097 LIB libspdk_json.a 00:03:12.097 CC lib/env_dpdk/pci_vmd.o 00:03:12.097 CC lib/env_dpdk/pci_idxd.o 00:03:12.097 SO libspdk_json.so.6.0 00:03:12.097 CC lib/env_dpdk/pci_event.o 00:03:12.356 SYMLINK libspdk_json.so 00:03:12.356 CC lib/env_dpdk/sigbus_handler.o 00:03:12.356 CC lib/env_dpdk/pci_dpdk.o 00:03:12.356 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.356 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.356 LIB libspdk_idxd.a 00:03:12.356 SO libspdk_idxd.so.12.1 00:03:12.356 LIB libspdk_vmd.a 00:03:12.356 SYMLINK libspdk_idxd.so 00:03:12.356 SO libspdk_vmd.so.6.0 00:03:12.356 CC lib/jsonrpc/jsonrpc_server.o 00:03:12.356 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:12.356 CC lib/jsonrpc/jsonrpc_client.o 00:03:12.356 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:12.615 SYMLINK libspdk_vmd.so 00:03:12.874 LIB libspdk_jsonrpc.a 00:03:12.874 SO libspdk_jsonrpc.so.6.0 00:03:12.874 SYMLINK libspdk_jsonrpc.so 
00:03:13.132 CC lib/rpc/rpc.o 00:03:13.389 LIB libspdk_rpc.a 00:03:13.389 SO libspdk_rpc.so.6.0 00:03:13.389 LIB libspdk_env_dpdk.a 00:03:13.647 SYMLINK libspdk_rpc.so 00:03:13.647 SO libspdk_env_dpdk.so.15.0 00:03:13.647 SYMLINK libspdk_env_dpdk.so 00:03:13.647 CC lib/keyring/keyring.o 00:03:13.647 CC lib/trace/trace.o 00:03:13.647 CC lib/keyring/keyring_rpc.o 00:03:13.647 CC lib/trace/trace_flags.o 00:03:13.647 CC lib/trace/trace_rpc.o 00:03:13.647 CC lib/notify/notify.o 00:03:13.647 CC lib/notify/notify_rpc.o 00:03:13.905 LIB libspdk_notify.a 00:03:13.905 SO libspdk_notify.so.6.0 00:03:13.905 LIB libspdk_keyring.a 00:03:14.163 SYMLINK libspdk_notify.so 00:03:14.163 SO libspdk_keyring.so.2.0 00:03:14.163 LIB libspdk_trace.a 00:03:14.163 SO libspdk_trace.so.11.0 00:03:14.163 SYMLINK libspdk_keyring.so 00:03:14.163 SYMLINK libspdk_trace.so 00:03:14.422 CC lib/sock/sock.o 00:03:14.422 CC lib/sock/sock_rpc.o 00:03:14.422 CC lib/thread/thread.o 00:03:14.422 CC lib/thread/iobuf.o 00:03:14.989 LIB libspdk_sock.a 00:03:14.989 SO libspdk_sock.so.10.0 00:03:15.248 SYMLINK libspdk_sock.so 00:03:15.507 CC lib/nvme/nvme_ctrlr.o 00:03:15.507 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:15.507 CC lib/nvme/nvme_ns_cmd.o 00:03:15.507 CC lib/nvme/nvme_fabric.o 00:03:15.507 CC lib/nvme/nvme_ns.o 00:03:15.507 CC lib/nvme/nvme_pcie.o 00:03:15.507 CC lib/nvme/nvme.o 00:03:15.507 CC lib/nvme/nvme_pcie_common.o 00:03:15.507 CC lib/nvme/nvme_qpair.o 00:03:16.441 CC lib/nvme/nvme_quirks.o 00:03:16.441 CC lib/nvme/nvme_transport.o 00:03:16.441 CC lib/nvme/nvme_discovery.o 00:03:16.441 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:16.441 LIB libspdk_thread.a 00:03:16.441 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:16.441 CC lib/nvme/nvme_tcp.o 00:03:16.699 SO libspdk_thread.so.10.1 00:03:16.699 CC lib/nvme/nvme_opal.o 00:03:16.699 SYMLINK libspdk_thread.so 00:03:16.699 CC lib/nvme/nvme_io_msg.o 00:03:16.699 CC lib/nvme/nvme_poll_group.o 00:03:16.957 CC lib/nvme/nvme_zns.o 00:03:17.215 CC lib/nvme/nvme_stubs.o 00:03:17.215 CC lib/nvme/nvme_auth.o 00:03:17.215 CC lib/nvme/nvme_cuse.o 00:03:17.474 CC lib/nvme/nvme_vfio_user.o 00:03:17.474 CC lib/nvme/nvme_rdma.o 00:03:17.474 CC lib/accel/accel.o 00:03:17.474 CC lib/accel/accel_rpc.o 00:03:17.732 CC lib/accel/accel_sw.o 00:03:17.732 CC lib/blob/blobstore.o 00:03:17.732 CC lib/init/json_config.o 00:03:17.991 CC lib/init/subsystem.o 00:03:17.991 CC lib/init/subsystem_rpc.o 00:03:18.251 CC lib/blob/request.o 00:03:18.251 CC lib/blob/zeroes.o 00:03:18.251 CC lib/init/rpc.o 00:03:18.251 CC lib/blob/blob_bs_dev.o 00:03:18.509 LIB libspdk_init.a 00:03:18.509 CC lib/virtio/virtio.o 00:03:18.509 SO libspdk_init.so.6.0 00:03:18.509 CC lib/vfu_tgt/tgt_endpoint.o 00:03:18.509 CC lib/vfu_tgt/tgt_rpc.o 00:03:18.509 CC lib/virtio/virtio_vhost_user.o 00:03:18.509 SYMLINK libspdk_init.so 00:03:18.509 CC lib/virtio/virtio_vfio_user.o 00:03:18.509 CC lib/fsdev/fsdev.o 00:03:18.768 CC lib/virtio/virtio_pci.o 00:03:18.768 CC lib/event/app.o 00:03:18.768 LIB libspdk_accel.a 00:03:18.768 CC lib/event/reactor.o 00:03:18.768 CC lib/event/log_rpc.o 00:03:19.027 SO libspdk_accel.so.16.0 00:03:19.027 LIB libspdk_vfu_tgt.a 00:03:19.027 SO libspdk_vfu_tgt.so.3.0 00:03:19.027 CC lib/event/app_rpc.o 00:03:19.027 SYMLINK libspdk_accel.so 00:03:19.027 CC lib/event/scheduler_static.o 00:03:19.027 SYMLINK libspdk_vfu_tgt.so 00:03:19.027 CC lib/fsdev/fsdev_io.o 00:03:19.027 LIB libspdk_nvme.a 00:03:19.027 LIB libspdk_virtio.a 00:03:19.027 SO libspdk_virtio.so.7.0 00:03:19.027 CC lib/fsdev/fsdev_rpc.o 00:03:19.286 CC 
lib/bdev/bdev.o 00:03:19.286 SYMLINK libspdk_virtio.so 00:03:19.286 CC lib/bdev/bdev_rpc.o 00:03:19.286 CC lib/bdev/bdev_zone.o 00:03:19.286 CC lib/bdev/part.o 00:03:19.286 SO libspdk_nvme.so.14.0 00:03:19.286 CC lib/bdev/scsi_nvme.o 00:03:19.286 LIB libspdk_event.a 00:03:19.544 SO libspdk_event.so.14.0 00:03:19.544 LIB libspdk_fsdev.a 00:03:19.544 SYMLINK libspdk_event.so 00:03:19.544 SO libspdk_fsdev.so.1.0 00:03:19.544 SYMLINK libspdk_fsdev.so 00:03:19.544 SYMLINK libspdk_nvme.so 00:03:19.803 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:20.748 LIB libspdk_fuse_dispatcher.a 00:03:20.748 SO libspdk_fuse_dispatcher.so.1.0 00:03:20.748 SYMLINK libspdk_fuse_dispatcher.so 00:03:22.141 LIB libspdk_blob.a 00:03:22.141 SO libspdk_blob.so.11.0 00:03:22.141 SYMLINK libspdk_blob.so 00:03:22.400 CC lib/blobfs/blobfs.o 00:03:22.400 CC lib/blobfs/tree.o 00:03:22.400 CC lib/lvol/lvol.o 00:03:22.658 LIB libspdk_bdev.a 00:03:22.658 SO libspdk_bdev.so.16.0 00:03:22.658 SYMLINK libspdk_bdev.so 00:03:22.916 CC lib/scsi/dev.o 00:03:22.916 CC lib/ftl/ftl_core.o 00:03:22.916 CC lib/scsi/lun.o 00:03:22.916 CC lib/ftl/ftl_init.o 00:03:22.916 CC lib/scsi/port.o 00:03:22.916 CC lib/ublk/ublk.o 00:03:22.916 CC lib/nvmf/ctrlr.o 00:03:22.916 CC lib/nbd/nbd.o 00:03:23.174 CC lib/scsi/scsi.o 00:03:23.174 CC lib/scsi/scsi_bdev.o 00:03:23.174 CC lib/ftl/ftl_layout.o 00:03:23.174 CC lib/ftl/ftl_debug.o 00:03:23.433 CC lib/ftl/ftl_io.o 00:03:23.433 LIB libspdk_blobfs.a 00:03:23.433 SO libspdk_blobfs.so.10.0 00:03:23.433 CC lib/ftl/ftl_sb.o 00:03:23.433 SYMLINK libspdk_blobfs.so 00:03:23.433 CC lib/nvmf/ctrlr_discovery.o 00:03:23.433 CC lib/nbd/nbd_rpc.o 00:03:23.691 CC lib/nvmf/ctrlr_bdev.o 00:03:23.691 CC lib/nvmf/subsystem.o 00:03:23.691 LIB libspdk_lvol.a 00:03:23.691 SO libspdk_lvol.so.10.0 00:03:23.691 CC lib/nvmf/nvmf.o 00:03:23.691 CC lib/ftl/ftl_l2p.o 00:03:23.691 LIB libspdk_nbd.a 00:03:23.691 SYMLINK libspdk_lvol.so 00:03:23.691 CC lib/ftl/ftl_l2p_flat.o 00:03:23.691 SO libspdk_nbd.so.7.0 00:03:23.691 CC lib/ublk/ublk_rpc.o 00:03:23.691 CC lib/scsi/scsi_pr.o 00:03:23.691 SYMLINK libspdk_nbd.so 00:03:23.691 CC lib/ftl/ftl_nv_cache.o 00:03:23.949 CC lib/nvmf/nvmf_rpc.o 00:03:23.949 CC lib/nvmf/transport.o 00:03:23.949 LIB libspdk_ublk.a 00:03:23.949 SO libspdk_ublk.so.3.0 00:03:23.949 CC lib/nvmf/tcp.o 00:03:24.207 SYMLINK libspdk_ublk.so 00:03:24.207 CC lib/ftl/ftl_band.o 00:03:24.207 CC lib/scsi/scsi_rpc.o 00:03:24.464 CC lib/scsi/task.o 00:03:24.464 CC lib/nvmf/stubs.o 00:03:24.464 CC lib/ftl/ftl_band_ops.o 00:03:24.722 LIB libspdk_scsi.a 00:03:24.722 SO libspdk_scsi.so.9.0 00:03:24.722 CC lib/nvmf/mdns_server.o 00:03:24.722 SYMLINK libspdk_scsi.so 00:03:24.722 CC lib/nvmf/vfio_user.o 00:03:24.722 CC lib/ftl/ftl_writer.o 00:03:24.980 CC lib/nvmf/rdma.o 00:03:24.980 CC lib/ftl/ftl_rq.o 00:03:25.238 CC lib/nvmf/auth.o 00:03:25.238 CC lib/ftl/ftl_reloc.o 00:03:25.238 CC lib/iscsi/conn.o 00:03:25.238 CC lib/ftl/ftl_l2p_cache.o 00:03:25.238 CC lib/iscsi/init_grp.o 00:03:25.238 CC lib/ftl/ftl_p2l.o 00:03:25.496 CC lib/vhost/vhost.o 00:03:25.496 CC lib/iscsi/iscsi.o 00:03:25.496 CC lib/ftl/ftl_p2l_log.o 00:03:25.754 CC lib/vhost/vhost_rpc.o 00:03:26.012 CC lib/vhost/vhost_scsi.o 00:03:26.012 CC lib/iscsi/param.o 00:03:26.012 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.012 CC lib/iscsi/portal_grp.o 00:03:26.012 CC lib/iscsi/tgt_node.o 00:03:26.270 CC lib/iscsi/iscsi_subsystem.o 00:03:26.270 CC lib/iscsi/iscsi_rpc.o 00:03:26.270 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.528 CC lib/iscsi/task.o 00:03:26.528 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.528 CC lib/vhost/vhost_blk.o 00:03:26.786 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:26.786 CC lib/vhost/rte_vhost_user.o 00:03:26.786 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:26.786 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:26.786 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.786 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.044 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:27.044 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:27.044 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:27.044 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:27.044 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:27.044 CC lib/ftl/utils/ftl_conf.o 00:03:27.301 CC lib/ftl/utils/ftl_md.o 00:03:27.301 CC lib/ftl/utils/ftl_mempool.o 00:03:27.301 CC lib/ftl/utils/ftl_bitmap.o 00:03:27.301 CC lib/ftl/utils/ftl_property.o 00:03:27.301 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:27.301 LIB libspdk_iscsi.a 00:03:27.559 SO libspdk_iscsi.so.8.0 00:03:27.559 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:27.559 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:27.559 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:27.559 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:27.818 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:27.818 SYMLINK libspdk_iscsi.so 00:03:27.818 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:27.818 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:27.818 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:27.818 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:27.818 LIB libspdk_nvmf.a 00:03:27.818 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:27.818 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:27.818 LIB libspdk_vhost.a 00:03:27.818 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:27.818 CC lib/ftl/base/ftl_base_dev.o 00:03:27.818 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.076 CC lib/ftl/ftl_trace.o 00:03:28.076 SO libspdk_nvmf.so.19.0 00:03:28.076 SO libspdk_vhost.so.8.0 00:03:28.076 SYMLINK libspdk_vhost.so 00:03:28.335 LIB libspdk_ftl.a 00:03:28.335 SYMLINK libspdk_nvmf.so 00:03:28.593 SO libspdk_ftl.so.9.0 00:03:28.852 SYMLINK libspdk_ftl.so 00:03:29.110 CC module/env_dpdk/env_dpdk_rpc.o 00:03:29.110 CC module/vfu_device/vfu_virtio.o 00:03:29.110 CC module/blob/bdev/blob_bdev.o 00:03:29.110 CC module/accel/dsa/accel_dsa.o 00:03:29.110 CC module/accel/error/accel_error.o 00:03:29.110 CC module/keyring/file/keyring.o 00:03:29.110 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:29.110 CC module/sock/posix/posix.o 00:03:29.110 CC module/accel/ioat/accel_ioat.o 00:03:29.110 CC module/fsdev/aio/fsdev_aio.o 00:03:29.369 LIB libspdk_env_dpdk_rpc.a 00:03:29.369 SO libspdk_env_dpdk_rpc.so.6.0 00:03:29.369 SYMLINK libspdk_env_dpdk_rpc.so 00:03:29.369 CC module/accel/dsa/accel_dsa_rpc.o 00:03:29.369 CC module/keyring/file/keyring_rpc.o 00:03:29.369 CC module/accel/error/accel_error_rpc.o 00:03:29.369 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.369 LIB libspdk_scheduler_dynamic.a 00:03:29.369 SO libspdk_scheduler_dynamic.so.4.0 00:03:29.369 LIB libspdk_keyring_file.a 00:03:29.627 SYMLINK libspdk_scheduler_dynamic.so 00:03:29.627 LIB libspdk_blob_bdev.a 00:03:29.627 LIB libspdk_accel_dsa.a 00:03:29.627 SO libspdk_keyring_file.so.2.0 00:03:29.627 SO libspdk_blob_bdev.so.11.0 00:03:29.627 LIB libspdk_accel_error.a 00:03:29.627 SO libspdk_accel_dsa.so.5.0 00:03:29.627 LIB libspdk_accel_ioat.a 00:03:29.627 SO libspdk_accel_error.so.2.0 00:03:29.627 SO libspdk_accel_ioat.so.6.0 00:03:29.627 SYMLINK libspdk_keyring_file.so 00:03:29.627 SYMLINK libspdk_blob_bdev.so 00:03:29.627 SYMLINK libspdk_accel_dsa.so 00:03:29.627 SYMLINK libspdk_accel_error.so 00:03:29.627 CC module/vfu_device/vfu_virtio_blk.o 
00:03:29.627 CC module/accel/iaa/accel_iaa.o 00:03:29.627 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:29.627 SYMLINK libspdk_accel_ioat.so 00:03:29.627 CC module/vfu_device/vfu_virtio_scsi.o 00:03:29.885 CC module/keyring/linux/keyring.o 00:03:29.885 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.885 CC module/bdev/delay/vbdev_delay.o 00:03:29.885 CC module/accel/iaa/accel_iaa_rpc.o 00:03:29.885 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:29.885 CC module/blobfs/bdev/blobfs_bdev.o 00:03:29.885 CC module/vfu_device/vfu_virtio_rpc.o 00:03:29.885 CC module/keyring/linux/keyring_rpc.o 00:03:29.885 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:29.885 CC module/vfu_device/vfu_virtio_fs.o 00:03:30.143 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:30.143 LIB libspdk_accel_iaa.a 00:03:30.143 CC module/fsdev/aio/linux_aio_mgr.o 00:03:30.143 LIB libspdk_keyring_linux.a 00:03:30.143 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.143 SO libspdk_accel_iaa.so.3.0 00:03:30.143 SO libspdk_keyring_linux.so.1.0 00:03:30.143 LIB libspdk_sock_posix.a 00:03:30.143 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:30.143 CC module/scheduler/gscheduler/gscheduler.o 00:03:30.143 SO libspdk_sock_posix.so.6.0 00:03:30.143 SYMLINK libspdk_accel_iaa.so 00:03:30.143 SYMLINK libspdk_keyring_linux.so 00:03:30.143 SYMLINK libspdk_sock_posix.so 00:03:30.401 LIB libspdk_vfu_device.a 00:03:30.401 LIB libspdk_blobfs_bdev.a 00:03:30.401 SO libspdk_vfu_device.so.3.0 00:03:30.401 LIB libspdk_fsdev_aio.a 00:03:30.401 LIB libspdk_scheduler_gscheduler.a 00:03:30.401 SO libspdk_blobfs_bdev.so.6.0 00:03:30.401 LIB libspdk_bdev_delay.a 00:03:30.401 SO libspdk_scheduler_gscheduler.so.4.0 00:03:30.401 SO libspdk_fsdev_aio.so.1.0 00:03:30.401 SO libspdk_bdev_delay.so.6.0 00:03:30.401 CC module/bdev/error/vbdev_error.o 00:03:30.401 CC module/bdev/gpt/gpt.o 00:03:30.401 CC module/sock/uring/uring.o 00:03:30.401 SYMLINK libspdk_blobfs_bdev.so 00:03:30.401 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.401 SYMLINK libspdk_vfu_device.so 00:03:30.401 SYMLINK libspdk_scheduler_gscheduler.so 00:03:30.401 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.401 CC module/bdev/gpt/vbdev_gpt.o 00:03:30.401 SYMLINK libspdk_bdev_delay.so 00:03:30.401 SYMLINK libspdk_fsdev_aio.so 00:03:30.401 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:30.401 CC module/bdev/malloc/bdev_malloc.o 00:03:30.659 CC module/bdev/null/bdev_null.o 00:03:30.659 CC module/bdev/nvme/bdev_nvme.o 00:03:30.659 LIB libspdk_bdev_error.a 00:03:30.659 SO libspdk_bdev_error.so.6.0 00:03:30.918 CC module/bdev/passthru/vbdev_passthru.o 00:03:30.918 LIB libspdk_bdev_gpt.a 00:03:30.918 CC module/bdev/raid/bdev_raid.o 00:03:30.918 SO libspdk_bdev_gpt.so.6.0 00:03:30.918 SYMLINK libspdk_bdev_error.so 00:03:30.918 CC module/bdev/raid/bdev_raid_rpc.o 00:03:30.918 SYMLINK libspdk_bdev_gpt.so 00:03:30.918 CC module/bdev/raid/bdev_raid_sb.o 00:03:30.918 CC module/bdev/null/bdev_null_rpc.o 00:03:30.918 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.178 LIB libspdk_bdev_lvol.a 00:03:31.178 CC module/bdev/raid/raid0.o 00:03:31.178 SO libspdk_bdev_lvol.so.6.0 00:03:31.178 CC module/bdev/split/vbdev_split.o 00:03:31.178 LIB libspdk_bdev_null.a 00:03:31.178 SO libspdk_bdev_null.so.6.0 00:03:31.178 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:31.178 LIB libspdk_bdev_malloc.a 00:03:31.178 SYMLINK libspdk_bdev_lvol.so 00:03:31.178 SO libspdk_bdev_malloc.so.6.0 00:03:31.178 SYMLINK libspdk_bdev_null.so 00:03:31.178 CC module/bdev/raid/raid1.o 00:03:31.437 SYMLINK libspdk_bdev_malloc.so 00:03:31.437 
LIB libspdk_bdev_passthru.a 00:03:31.437 CC module/bdev/split/vbdev_split_rpc.o 00:03:31.437 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:31.437 SO libspdk_bdev_passthru.so.6.0 00:03:31.437 LIB libspdk_sock_uring.a 00:03:31.437 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:31.437 SO libspdk_sock_uring.so.5.0 00:03:31.437 SYMLINK libspdk_bdev_passthru.so 00:03:31.437 CC module/bdev/uring/bdev_uring.o 00:03:31.437 CC module/bdev/uring/bdev_uring_rpc.o 00:03:31.437 SYMLINK libspdk_sock_uring.so 00:03:31.437 CC module/bdev/aio/bdev_aio.o 00:03:31.697 LIB libspdk_bdev_split.a 00:03:31.697 SO libspdk_bdev_split.so.6.0 00:03:31.697 CC module/bdev/raid/concat.o 00:03:31.697 SYMLINK libspdk_bdev_split.so 00:03:31.697 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:31.697 CC module/bdev/ftl/bdev_ftl.o 00:03:31.697 CC module/bdev/iscsi/bdev_iscsi.o 00:03:31.697 LIB libspdk_bdev_zone_block.a 00:03:31.956 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:31.956 SO libspdk_bdev_zone_block.so.6.0 00:03:31.956 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:31.956 SYMLINK libspdk_bdev_zone_block.so 00:03:31.956 LIB libspdk_bdev_uring.a 00:03:31.956 CC module/bdev/nvme/nvme_rpc.o 00:03:31.956 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.956 SO libspdk_bdev_uring.so.6.0 00:03:31.956 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:31.957 SYMLINK libspdk_bdev_uring.so 00:03:31.957 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.216 LIB libspdk_bdev_raid.a 00:03:32.216 LIB libspdk_bdev_aio.a 00:03:32.216 SO libspdk_bdev_aio.so.6.0 00:03:32.216 SO libspdk_bdev_raid.so.6.0 00:03:32.216 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.216 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.216 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.216 CC module/bdev/nvme/vbdev_opal.o 00:03:32.216 SYMLINK libspdk_bdev_aio.so 00:03:32.216 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.216 SYMLINK libspdk_bdev_raid.so 00:03:32.216 LIB libspdk_bdev_ftl.a 00:03:32.216 SO libspdk_bdev_ftl.so.6.0 00:03:32.475 SYMLINK libspdk_bdev_ftl.so 00:03:32.475 LIB libspdk_bdev_iscsi.a 00:03:32.475 SO libspdk_bdev_iscsi.so.6.0 00:03:32.475 LIB libspdk_bdev_virtio.a 00:03:32.475 SYMLINK libspdk_bdev_iscsi.so 00:03:32.475 SO libspdk_bdev_virtio.so.6.0 00:03:32.734 SYMLINK libspdk_bdev_virtio.so 00:03:33.672 LIB libspdk_bdev_nvme.a 00:03:33.672 SO libspdk_bdev_nvme.so.7.0 00:03:33.672 SYMLINK libspdk_bdev_nvme.so 00:03:34.262 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.262 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.262 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.262 CC module/event/subsystems/keyring/keyring.o 00:03:34.262 CC module/event/subsystems/fsdev/fsdev.o 00:03:34.262 CC module/event/subsystems/sock/sock.o 00:03:34.262 CC module/event/subsystems/vmd/vmd.o 00:03:34.262 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.262 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.262 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:34.262 LIB libspdk_event_fsdev.a 00:03:34.262 LIB libspdk_event_keyring.a 00:03:34.262 LIB libspdk_event_scheduler.a 00:03:34.262 LIB libspdk_event_vhost_blk.a 00:03:34.262 SO libspdk_event_fsdev.so.1.0 00:03:34.262 SO libspdk_event_keyring.so.1.0 00:03:34.262 LIB libspdk_event_vmd.a 00:03:34.262 LIB libspdk_event_sock.a 00:03:34.262 SO libspdk_event_scheduler.so.4.0 00:03:34.262 SO libspdk_event_vhost_blk.so.3.0 00:03:34.535 LIB libspdk_event_vfu_tgt.a 00:03:34.535 SO libspdk_event_sock.so.5.0 00:03:34.535 SYMLINK libspdk_event_fsdev.so 00:03:34.535 LIB libspdk_event_iobuf.a 
00:03:34.535 SO libspdk_event_vmd.so.6.0 00:03:34.535 SYMLINK libspdk_event_keyring.so 00:03:34.535 SYMLINK libspdk_event_scheduler.so 00:03:34.535 SYMLINK libspdk_event_vhost_blk.so 00:03:34.535 SO libspdk_event_vfu_tgt.so.3.0 00:03:34.535 SO libspdk_event_iobuf.so.3.0 00:03:34.535 SYMLINK libspdk_event_vmd.so 00:03:34.535 SYMLINK libspdk_event_sock.so 00:03:34.535 SYMLINK libspdk_event_vfu_tgt.so 00:03:34.535 SYMLINK libspdk_event_iobuf.so 00:03:34.795 CC module/event/subsystems/accel/accel.o 00:03:34.795 LIB libspdk_event_accel.a 00:03:35.053 SO libspdk_event_accel.so.6.0 00:03:35.053 SYMLINK libspdk_event_accel.so 00:03:35.313 CC module/event/subsystems/bdev/bdev.o 00:03:35.572 LIB libspdk_event_bdev.a 00:03:35.572 SO libspdk_event_bdev.so.6.0 00:03:35.572 SYMLINK libspdk_event_bdev.so 00:03:35.831 CC module/event/subsystems/nbd/nbd.o 00:03:35.831 CC module/event/subsystems/scsi/scsi.o 00:03:35.831 CC module/event/subsystems/ublk/ublk.o 00:03:35.831 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:35.831 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:35.831 LIB libspdk_event_ublk.a 00:03:36.090 LIB libspdk_event_nbd.a 00:03:36.090 LIB libspdk_event_scsi.a 00:03:36.090 SO libspdk_event_ublk.so.3.0 00:03:36.090 SO libspdk_event_nbd.so.6.0 00:03:36.090 SO libspdk_event_scsi.so.6.0 00:03:36.090 SYMLINK libspdk_event_ublk.so 00:03:36.090 SYMLINK libspdk_event_nbd.so 00:03:36.090 SYMLINK libspdk_event_scsi.so 00:03:36.090 LIB libspdk_event_nvmf.a 00:03:36.090 SO libspdk_event_nvmf.so.6.0 00:03:36.349 SYMLINK libspdk_event_nvmf.so 00:03:36.349 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:36.349 CC module/event/subsystems/iscsi/iscsi.o 00:03:36.607 LIB libspdk_event_vhost_scsi.a 00:03:36.607 SO libspdk_event_vhost_scsi.so.3.0 00:03:36.607 LIB libspdk_event_iscsi.a 00:03:36.607 SO libspdk_event_iscsi.so.6.0 00:03:36.607 SYMLINK libspdk_event_vhost_scsi.so 00:03:36.607 SYMLINK libspdk_event_iscsi.so 00:03:36.867 SO libspdk.so.6.0 00:03:36.867 SYMLINK libspdk.so 00:03:37.125 CC app/trace_record/trace_record.o 00:03:37.125 CXX app/trace/trace.o 00:03:37.126 CC app/spdk_lspci/spdk_lspci.o 00:03:37.126 CC app/spdk_nvme_perf/perf.o 00:03:37.126 CC app/nvmf_tgt/nvmf_main.o 00:03:37.126 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.126 CC examples/util/zipf/zipf.o 00:03:37.126 CC examples/ioat/perf/perf.o 00:03:37.126 CC test/thread/poller_perf/poller_perf.o 00:03:37.126 CC app/spdk_tgt/spdk_tgt.o 00:03:37.126 LINK spdk_lspci 00:03:37.385 LINK nvmf_tgt 00:03:37.385 LINK poller_perf 00:03:37.385 LINK zipf 00:03:37.385 LINK iscsi_tgt 00:03:37.385 LINK spdk_trace_record 00:03:37.385 LINK spdk_tgt 00:03:37.385 LINK ioat_perf 00:03:37.643 CC app/spdk_nvme_identify/identify.o 00:03:37.643 LINK spdk_trace 00:03:37.643 CC app/spdk_nvme_discover/discovery_aer.o 00:03:37.643 CC app/spdk_top/spdk_top.o 00:03:37.643 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.643 CC examples/ioat/verify/verify.o 00:03:37.643 CC test/dma/test_dma/test_dma.o 00:03:37.643 CC app/spdk_dd/spdk_dd.o 00:03:37.902 LINK spdk_nvme_discover 00:03:37.902 CC app/fio/nvme/fio_plugin.o 00:03:37.902 LINK interrupt_tgt 00:03:37.902 CC examples/thread/thread/thread_ex.o 00:03:37.902 LINK verify 00:03:38.160 CC app/fio/bdev/fio_plugin.o 00:03:38.160 LINK thread 00:03:38.418 LINK spdk_nvme_perf 00:03:38.418 LINK spdk_dd 00:03:38.418 CC examples/sock/hello_world/hello_sock.o 00:03:38.418 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.418 LINK test_dma 00:03:38.418 LINK lsvmd 00:03:38.675 CC examples/vmd/led/led.o 00:03:38.675 LINK 
spdk_nvme_identify 00:03:38.675 LINK spdk_nvme 00:03:38.675 CC app/vhost/vhost.o 00:03:38.675 LINK hello_sock 00:03:38.675 LINK led 00:03:38.675 CC test/app/bdev_svc/bdev_svc.o 00:03:38.933 CC examples/idxd/perf/perf.o 00:03:38.933 LINK spdk_top 00:03:38.933 LINK spdk_bdev 00:03:38.933 CC test/app/histogram_perf/histogram_perf.o 00:03:38.933 CC test/app/jsoncat/jsoncat.o 00:03:38.933 LINK vhost 00:03:38.933 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:38.933 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.933 LINK histogram_perf 00:03:38.933 LINK bdev_svc 00:03:38.933 LINK jsoncat 00:03:38.933 CC test/app/stub/stub.o 00:03:39.192 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:39.192 CC examples/accel/perf/accel_perf.o 00:03:39.192 LINK idxd_perf 00:03:39.192 LINK stub 00:03:39.192 TEST_HEADER include/spdk/accel.h 00:03:39.192 TEST_HEADER include/spdk/accel_module.h 00:03:39.192 TEST_HEADER include/spdk/assert.h 00:03:39.192 TEST_HEADER include/spdk/barrier.h 00:03:39.192 TEST_HEADER include/spdk/base64.h 00:03:39.192 TEST_HEADER include/spdk/bdev.h 00:03:39.192 TEST_HEADER include/spdk/bdev_module.h 00:03:39.192 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.192 TEST_HEADER include/spdk/bit_array.h 00:03:39.192 CC examples/blob/hello_world/hello_blob.o 00:03:39.192 TEST_HEADER include/spdk/bit_pool.h 00:03:39.192 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.192 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.192 TEST_HEADER include/spdk/blobfs.h 00:03:39.192 TEST_HEADER include/spdk/blob.h 00:03:39.192 TEST_HEADER include/spdk/conf.h 00:03:39.192 TEST_HEADER include/spdk/config.h 00:03:39.192 TEST_HEADER include/spdk/cpuset.h 00:03:39.192 TEST_HEADER include/spdk/crc16.h 00:03:39.192 TEST_HEADER include/spdk/crc32.h 00:03:39.192 TEST_HEADER include/spdk/crc64.h 00:03:39.192 TEST_HEADER include/spdk/dif.h 00:03:39.192 TEST_HEADER include/spdk/dma.h 00:03:39.451 TEST_HEADER include/spdk/endian.h 00:03:39.451 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.451 TEST_HEADER include/spdk/env.h 00:03:39.451 TEST_HEADER include/spdk/event.h 00:03:39.451 TEST_HEADER include/spdk/fd_group.h 00:03:39.451 TEST_HEADER include/spdk/fd.h 00:03:39.451 TEST_HEADER include/spdk/file.h 00:03:39.451 TEST_HEADER include/spdk/fsdev.h 00:03:39.451 TEST_HEADER include/spdk/fsdev_module.h 00:03:39.451 CC examples/nvme/hello_world/hello_world.o 00:03:39.451 TEST_HEADER include/spdk/ftl.h 00:03:39.451 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:39.451 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.451 TEST_HEADER include/spdk/hexlify.h 00:03:39.451 TEST_HEADER include/spdk/histogram_data.h 00:03:39.451 TEST_HEADER include/spdk/idxd.h 00:03:39.451 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.451 TEST_HEADER include/spdk/init.h 00:03:39.451 TEST_HEADER include/spdk/ioat.h 00:03:39.451 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.451 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.451 TEST_HEADER include/spdk/json.h 00:03:39.451 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.451 TEST_HEADER include/spdk/keyring.h 00:03:39.451 TEST_HEADER include/spdk/keyring_module.h 00:03:39.451 TEST_HEADER include/spdk/likely.h 00:03:39.451 TEST_HEADER include/spdk/log.h 00:03:39.451 TEST_HEADER include/spdk/lvol.h 00:03:39.451 TEST_HEADER include/spdk/md5.h 00:03:39.451 TEST_HEADER include/spdk/memory.h 00:03:39.451 TEST_HEADER include/spdk/mmio.h 00:03:39.451 TEST_HEADER include/spdk/nbd.h 00:03:39.451 TEST_HEADER include/spdk/net.h 00:03:39.451 TEST_HEADER include/spdk/notify.h 00:03:39.451 TEST_HEADER include/spdk/nvme.h 
00:03:39.451 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.451 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.451 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.451 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.451 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.451 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.451 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.451 TEST_HEADER include/spdk/nvmf.h 00:03:39.451 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.451 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.451 TEST_HEADER include/spdk/opal.h 00:03:39.451 TEST_HEADER include/spdk/opal_spec.h 00:03:39.451 TEST_HEADER include/spdk/pci_ids.h 00:03:39.451 LINK nvme_fuzz 00:03:39.451 TEST_HEADER include/spdk/pipe.h 00:03:39.451 TEST_HEADER include/spdk/queue.h 00:03:39.451 TEST_HEADER include/spdk/reduce.h 00:03:39.451 TEST_HEADER include/spdk/rpc.h 00:03:39.451 TEST_HEADER include/spdk/scheduler.h 00:03:39.451 TEST_HEADER include/spdk/scsi.h 00:03:39.451 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.451 TEST_HEADER include/spdk/sock.h 00:03:39.451 TEST_HEADER include/spdk/stdinc.h 00:03:39.451 TEST_HEADER include/spdk/string.h 00:03:39.451 TEST_HEADER include/spdk/thread.h 00:03:39.451 TEST_HEADER include/spdk/trace.h 00:03:39.451 TEST_HEADER include/spdk/trace_parser.h 00:03:39.451 TEST_HEADER include/spdk/tree.h 00:03:39.451 TEST_HEADER include/spdk/ublk.h 00:03:39.451 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.451 TEST_HEADER include/spdk/util.h 00:03:39.451 TEST_HEADER include/spdk/uuid.h 00:03:39.451 TEST_HEADER include/spdk/version.h 00:03:39.451 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.451 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.451 TEST_HEADER include/spdk/vhost.h 00:03:39.451 TEST_HEADER include/spdk/vmd.h 00:03:39.451 TEST_HEADER include/spdk/xor.h 00:03:39.451 TEST_HEADER include/spdk/zipf.h 00:03:39.451 CXX test/cpp_headers/accel.o 00:03:39.451 LINK hello_fsdev 00:03:39.451 CC examples/blob/cli/blobcli.o 00:03:39.451 LINK hello_blob 00:03:39.451 CC test/event/event_perf/event_perf.o 00:03:39.711 CXX test/cpp_headers/accel_module.o 00:03:39.711 LINK hello_world 00:03:39.711 LINK event_perf 00:03:39.711 CXX test/cpp_headers/assert.o 00:03:39.711 CC examples/nvme/reconnect/reconnect.o 00:03:39.711 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.711 LINK accel_perf 00:03:39.969 CC test/event/reactor/reactor.o 00:03:39.969 CXX test/cpp_headers/barrier.o 00:03:39.969 CC test/nvme/aer/aer.o 00:03:39.969 CC examples/nvme/arbitration/arbitration.o 00:03:39.969 LINK reactor 00:03:39.969 LINK blobcli 00:03:39.969 LINK mem_callbacks 00:03:40.228 CC examples/nvme/hotplug/hotplug.o 00:03:40.228 CXX test/cpp_headers/base64.o 00:03:40.228 LINK reconnect 00:03:40.228 CC test/env/vtophys/vtophys.o 00:03:40.228 CXX test/cpp_headers/bdev.o 00:03:40.228 CC test/event/reactor_perf/reactor_perf.o 00:03:40.228 LINK aer 00:03:40.486 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.486 CXX test/cpp_headers/bdev_module.o 00:03:40.486 LINK arbitration 00:03:40.486 LINK hotplug 00:03:40.486 LINK nvme_manage 00:03:40.486 LINK vtophys 00:03:40.486 LINK reactor_perf 00:03:40.486 LINK env_dpdk_post_init 00:03:40.744 CXX test/cpp_headers/bdev_zone.o 00:03:40.744 CC test/nvme/reset/reset.o 00:03:40.744 CXX test/cpp_headers/bit_array.o 00:03:40.744 CC test/env/memory/memory_ut.o 00:03:40.744 CC test/env/pci/pci_ut.o 00:03:40.744 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.744 CC test/event/app_repeat/app_repeat.o 00:03:40.744 CC examples/bdev/hello_world/hello_bdev.o 
00:03:40.744 CC examples/nvme/abort/abort.o 00:03:40.744 CXX test/cpp_headers/bit_pool.o 00:03:41.009 LINK cmb_copy 00:03:41.009 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:41.009 LINK reset 00:03:41.009 LINK app_repeat 00:03:41.009 CXX test/cpp_headers/blob_bdev.o 00:03:41.009 LINK iscsi_fuzz 00:03:41.009 CXX test/cpp_headers/blobfs_bdev.o 00:03:41.009 LINK hello_bdev 00:03:41.009 LINK pmr_persistence 00:03:41.268 LINK pci_ut 00:03:41.268 CC test/nvme/sgl/sgl.o 00:03:41.268 CC test/event/scheduler/scheduler.o 00:03:41.268 CXX test/cpp_headers/blobfs.o 00:03:41.268 LINK abort 00:03:41.268 CXX test/cpp_headers/blob.o 00:03:41.268 CC test/nvme/e2edp/nvme_dp.o 00:03:41.526 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:41.526 CXX test/cpp_headers/conf.o 00:03:41.526 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.526 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:41.526 CXX test/cpp_headers/config.o 00:03:41.526 CXX test/cpp_headers/cpuset.o 00:03:41.526 LINK sgl 00:03:41.526 LINK scheduler 00:03:41.526 CXX test/cpp_headers/crc16.o 00:03:41.527 CC test/rpc_client/rpc_client_test.o 00:03:41.527 CXX test/cpp_headers/crc32.o 00:03:41.786 LINK nvme_dp 00:03:41.786 CC test/nvme/overhead/overhead.o 00:03:41.786 CXX test/cpp_headers/crc64.o 00:03:41.786 CC test/nvme/err_injection/err_injection.o 00:03:41.786 LINK rpc_client_test 00:03:41.786 CC test/nvme/startup/startup.o 00:03:41.786 CXX test/cpp_headers/dif.o 00:03:42.044 CC test/accel/dif/dif.o 00:03:42.044 LINK vhost_fuzz 00:03:42.044 LINK startup 00:03:42.044 CXX test/cpp_headers/dma.o 00:03:42.044 LINK err_injection 00:03:42.044 LINK memory_ut 00:03:42.044 LINK overhead 00:03:42.302 CC test/blobfs/mkfs/mkfs.o 00:03:42.302 CXX test/cpp_headers/endian.o 00:03:42.302 CC test/nvme/reserve/reserve.o 00:03:42.302 CXX test/cpp_headers/env_dpdk.o 00:03:42.302 CC test/lvol/esnap/esnap.o 00:03:42.302 CC test/nvme/simple_copy/simple_copy.o 00:03:42.302 CC test/nvme/connect_stress/connect_stress.o 00:03:42.302 CC test/nvme/boot_partition/boot_partition.o 00:03:42.560 CXX test/cpp_headers/env.o 00:03:42.560 LINK mkfs 00:03:42.560 LINK bdevperf 00:03:42.560 LINK reserve 00:03:42.560 CC test/nvme/compliance/nvme_compliance.o 00:03:42.560 LINK connect_stress 00:03:42.560 LINK simple_copy 00:03:42.560 LINK boot_partition 00:03:42.560 CXX test/cpp_headers/event.o 00:03:42.560 CXX test/cpp_headers/fd_group.o 00:03:42.819 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.819 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.819 CXX test/cpp_headers/fd.o 00:03:42.819 CXX test/cpp_headers/file.o 00:03:42.819 CC test/nvme/cuse/cuse.o 00:03:42.819 CC test/nvme/fdp/fdp.o 00:03:42.819 LINK dif 00:03:42.819 LINK nvme_compliance 00:03:42.819 CC examples/nvmf/nvmf/nvmf.o 00:03:43.077 CXX test/cpp_headers/fsdev.o 00:03:43.077 LINK fused_ordering 00:03:43.077 CXX test/cpp_headers/fsdev_module.o 00:03:43.077 LINK doorbell_aers 00:03:43.077 CXX test/cpp_headers/ftl.o 00:03:43.077 CXX test/cpp_headers/fuse_dispatcher.o 00:03:43.077 CXX test/cpp_headers/gpt_spec.o 00:03:43.077 CXX test/cpp_headers/hexlify.o 00:03:43.336 CXX test/cpp_headers/histogram_data.o 00:03:43.336 LINK fdp 00:03:43.336 LINK nvmf 00:03:43.336 CXX test/cpp_headers/idxd.o 00:03:43.336 CXX test/cpp_headers/idxd_spec.o 00:03:43.336 CXX test/cpp_headers/init.o 00:03:43.336 CXX test/cpp_headers/ioat.o 00:03:43.336 CXX test/cpp_headers/ioat_spec.o 00:03:43.336 CC test/bdev/bdevio/bdevio.o 00:03:43.336 CXX test/cpp_headers/iscsi_spec.o 00:03:43.593 CXX test/cpp_headers/json.o 00:03:43.593 CXX 
test/cpp_headers/jsonrpc.o 00:03:43.593 CXX test/cpp_headers/keyring.o 00:03:43.593 CXX test/cpp_headers/keyring_module.o 00:03:43.593 CXX test/cpp_headers/likely.o 00:03:43.593 CXX test/cpp_headers/log.o 00:03:43.593 CXX test/cpp_headers/lvol.o 00:03:43.593 CXX test/cpp_headers/md5.o 00:03:43.593 CXX test/cpp_headers/memory.o 00:03:43.593 CXX test/cpp_headers/mmio.o 00:03:43.593 CXX test/cpp_headers/nbd.o 00:03:43.850 CXX test/cpp_headers/net.o 00:03:43.850 CXX test/cpp_headers/notify.o 00:03:43.850 CXX test/cpp_headers/nvme.o 00:03:43.850 CXX test/cpp_headers/nvme_intel.o 00:03:43.850 LINK bdevio 00:03:43.850 CXX test/cpp_headers/nvme_ocssd.o 00:03:43.850 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:43.850 CXX test/cpp_headers/nvme_spec.o 00:03:43.850 CXX test/cpp_headers/nvme_zns.o 00:03:43.850 CXX test/cpp_headers/nvmf_cmd.o 00:03:43.850 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:44.109 CXX test/cpp_headers/nvmf.o 00:03:44.109 CXX test/cpp_headers/nvmf_spec.o 00:03:44.109 CXX test/cpp_headers/nvmf_transport.o 00:03:44.109 CXX test/cpp_headers/opal.o 00:03:44.109 CXX test/cpp_headers/opal_spec.o 00:03:44.109 CXX test/cpp_headers/pci_ids.o 00:03:44.109 CXX test/cpp_headers/pipe.o 00:03:44.109 CXX test/cpp_headers/queue.o 00:03:44.367 CXX test/cpp_headers/reduce.o 00:03:44.367 CXX test/cpp_headers/rpc.o 00:03:44.367 CXX test/cpp_headers/scheduler.o 00:03:44.367 CXX test/cpp_headers/scsi.o 00:03:44.368 CXX test/cpp_headers/scsi_spec.o 00:03:44.368 CXX test/cpp_headers/sock.o 00:03:44.368 CXX test/cpp_headers/stdinc.o 00:03:44.368 CXX test/cpp_headers/string.o 00:03:44.368 CXX test/cpp_headers/thread.o 00:03:44.368 LINK cuse 00:03:44.368 CXX test/cpp_headers/trace.o 00:03:44.368 CXX test/cpp_headers/trace_parser.o 00:03:44.626 CXX test/cpp_headers/tree.o 00:03:44.626 CXX test/cpp_headers/ublk.o 00:03:44.626 CXX test/cpp_headers/util.o 00:03:44.626 CXX test/cpp_headers/uuid.o 00:03:44.626 CXX test/cpp_headers/version.o 00:03:44.626 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.626 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.626 CXX test/cpp_headers/vhost.o 00:03:44.626 CXX test/cpp_headers/vmd.o 00:03:44.626 CXX test/cpp_headers/xor.o 00:03:44.626 CXX test/cpp_headers/zipf.o 00:03:48.809 LINK esnap 00:03:48.809 00:03:48.809 real 1m38.456s 00:03:48.809 user 9m26.971s 00:03:48.809 sys 1m36.816s 00:03:48.809 01:16:44 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:48.809 01:16:44 make -- common/autotest_common.sh@10 -- $ set +x 00:03:48.809 ************************************ 00:03:48.809 END TEST make 00:03:48.809 ************************************ 00:03:49.069 01:16:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.069 01:16:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.069 01:16:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.069 01:16:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.069 01:16:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.069 01:16:44 -- pm/common@44 -- $ pid=5288 00:03:49.069 01:16:44 -- pm/common@50 -- $ kill -TERM 5288 00:03:49.069 01:16:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.069 01:16:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.069 01:16:44 -- pm/common@44 -- $ pid=5290 00:03:49.069 01:16:44 -- pm/common@50 -- $ kill -TERM 5290 00:03:49.069 01:16:44 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:49.069 
01:16:44 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:49.069 01:16:44 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:49.069 01:16:44 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:49.069 01:16:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.069 01:16:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.069 01:16:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.069 01:16:44 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.069 01:16:44 -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.069 01:16:44 -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.069 01:16:44 -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.069 01:16:44 -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.069 01:16:44 -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.069 01:16:44 -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.069 01:16:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.069 01:16:44 -- scripts/common.sh@344 -- # case "$op" in 00:03:49.069 01:16:44 -- scripts/common.sh@345 -- # : 1 00:03:49.069 01:16:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.069 01:16:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.069 01:16:44 -- scripts/common.sh@365 -- # decimal 1 00:03:49.069 01:16:44 -- scripts/common.sh@353 -- # local d=1 00:03:49.069 01:16:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.069 01:16:44 -- scripts/common.sh@355 -- # echo 1 00:03:49.069 01:16:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.069 01:16:44 -- scripts/common.sh@366 -- # decimal 2 00:03:49.069 01:16:44 -- scripts/common.sh@353 -- # local d=2 00:03:49.069 01:16:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.069 01:16:44 -- scripts/common.sh@355 -- # echo 2 00:03:49.069 01:16:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.069 01:16:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.069 01:16:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.069 01:16:44 -- scripts/common.sh@368 -- # return 0 00:03:49.069 01:16:44 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.069 01:16:44 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.069 --rc genhtml_branch_coverage=1 00:03:49.069 --rc genhtml_function_coverage=1 00:03:49.069 --rc genhtml_legend=1 00:03:49.069 --rc geninfo_all_blocks=1 00:03:49.069 --rc geninfo_unexecuted_blocks=1 00:03:49.069 00:03:49.069 ' 00:03:49.069 01:16:44 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.069 --rc genhtml_branch_coverage=1 00:03:49.069 --rc genhtml_function_coverage=1 00:03:49.069 --rc genhtml_legend=1 00:03:49.069 --rc geninfo_all_blocks=1 00:03:49.069 --rc geninfo_unexecuted_blocks=1 00:03:49.069 00:03:49.069 ' 00:03:49.069 01:16:44 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.069 --rc genhtml_branch_coverage=1 00:03:49.069 --rc genhtml_function_coverage=1 00:03:49.069 --rc genhtml_legend=1 00:03:49.069 --rc geninfo_all_blocks=1 00:03:49.069 --rc geninfo_unexecuted_blocks=1 00:03:49.069 00:03:49.069 ' 00:03:49.069 01:16:44 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.069 --rc genhtml_branch_coverage=1 
00:03:49.069 --rc genhtml_function_coverage=1 00:03:49.069 --rc genhtml_legend=1 00:03:49.069 --rc geninfo_all_blocks=1 00:03:49.069 --rc geninfo_unexecuted_blocks=1 00:03:49.069 00:03:49.069 ' 00:03:49.069 01:16:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:49.069 01:16:44 -- nvmf/common.sh@7 -- # uname -s 00:03:49.069 01:16:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.069 01:16:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.069 01:16:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.069 01:16:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.069 01:16:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.069 01:16:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.069 01:16:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.069 01:16:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.069 01:16:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.069 01:16:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.069 01:16:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:03:49.069 01:16:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:03:49.069 01:16:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.069 01:16:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.069 01:16:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:49.069 01:16:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.069 01:16:44 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:49.069 01:16:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.069 01:16:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.069 01:16:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.069 01:16:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.069 01:16:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.069 01:16:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.069 01:16:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.069 01:16:44 -- paths/export.sh@5 -- # export PATH 00:03:49.069 01:16:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.069 01:16:44 -- nvmf/common.sh@51 -- # : 0 00:03:49.069 01:16:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.069 01:16:44 -- 
nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:49.069 01:16:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.069 01:16:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.069 01:16:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.069 01:16:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.070 01:16:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.070 01:16:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.070 01:16:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.070 01:16:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.070 01:16:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.070 01:16:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.070 01:16:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.070 01:16:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.070 01:16:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.070 01:16:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.070 01:16:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.328 01:16:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.328 01:16:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:49.328 01:16:45 -- spdk/autotest.sh@48 -- # udevadm_pid=54994 00:03:49.328 01:16:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.328 01:16:45 -- pm/common@17 -- # local monitor 00:03:49.328 01:16:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.328 01:16:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.328 01:16:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.328 01:16:45 -- pm/common@25 -- # sleep 1 00:03:49.328 01:16:45 -- pm/common@21 -- # date +%s 00:03:49.328 01:16:45 -- pm/common@21 -- # date +%s 00:03:49.328 01:16:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727486205 00:03:49.328 01:16:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727486205 00:03:49.328 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727486205_collect-cpu-load.pm.log 00:03:49.328 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727486205_collect-vmstat.pm.log 00:03:50.264 01:16:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.264 01:16:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.264 01:16:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:50.264 01:16:46 -- common/autotest_common.sh@10 -- # set +x 00:03:50.264 01:16:46 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.264 01:16:46 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:50.264 01:16:46 -- common/autotest_common.sh@10 -- # set +x 00:03:50.264 01:16:46 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:50.264 01:16:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:50.264 01:16:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:50.264 
01:16:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:50.264 01:16:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:50.264 01:16:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.264 01:16:46 -- common/autotest_common.sh@1455 -- # uname 00:03:50.264 01:16:46 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:50.264 01:16:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.264 01:16:46 -- common/autotest_common.sh@1475 -- # uname 00:03:50.264 01:16:46 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:50.264 01:16:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:50.264 01:16:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:50.264 lcov: LCOV version 1.15 00:03:50.523 01:16:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:05.494 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:05.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.373 01:17:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:20.373 01:17:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.373 01:17:16 -- common/autotest_common.sh@10 -- # set +x 00:04:20.373 01:17:16 -- spdk/autotest.sh@78 -- # rm -f 00:04:20.373 01:17:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.941 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:20.941 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:20.941 01:17:16 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:20.941 01:17:16 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:20.941 01:17:16 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:20.941 01:17:16 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:20.941 01:17:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.941 01:17:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:20.941 01:17:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:20.941 01:17:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.941 01:17:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.941 01:17:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.941 01:17:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:20.941 01:17:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:20.941 01:17:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:20.941 01:17:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.941 01:17:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.941 01:17:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 
00:04:20.941 01:17:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:20.941 01:17:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:20.941 01:17:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.941 01:17:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.941 01:17:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:20.941 01:17:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:20.941 01:17:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:20.941 01:17:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.941 01:17:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:20.941 01:17:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.941 01:17:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.941 01:17:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:20.941 01:17:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:20.941 01:17:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.941 No valid GPT data, bailing 00:04:21.200 01:17:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:21.200 01:17:16 -- scripts/common.sh@394 -- # pt= 00:04:21.200 01:17:16 -- scripts/common.sh@395 -- # return 1 00:04:21.200 01:17:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:21.200 1+0 records in 00:04:21.200 1+0 records out 00:04:21.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397173 s, 264 MB/s 00:04:21.200 01:17:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.200 01:17:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:21.200 01:17:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:21.200 01:17:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:21.200 01:17:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:21.200 No valid GPT data, bailing 00:04:21.200 01:17:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:21.200 01:17:16 -- scripts/common.sh@394 -- # pt= 00:04:21.200 01:17:16 -- scripts/common.sh@395 -- # return 1 00:04:21.200 01:17:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:21.200 1+0 records in 00:04:21.200 1+0 records out 00:04:21.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536601 s, 195 MB/s 00:04:21.200 01:17:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.200 01:17:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:21.200 01:17:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:21.200 01:17:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:21.200 01:17:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:21.200 No valid GPT data, bailing 00:04:21.200 01:17:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:21.200 01:17:17 -- scripts/common.sh@394 -- # pt= 00:04:21.200 01:17:17 -- scripts/common.sh@395 -- # return 1 00:04:21.200 01:17:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:21.200 1+0 records in 00:04:21.200 1+0 records out 00:04:21.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458124 s, 229 MB/s 00:04:21.200 01:17:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.200 01:17:17 -- spdk/autotest.sh@99 
-- # [[ -z '' ]] 00:04:21.200 01:17:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:21.200 01:17:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:21.200 01:17:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:21.200 No valid GPT data, bailing 00:04:21.200 01:17:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:21.459 01:17:17 -- scripts/common.sh@394 -- # pt= 00:04:21.459 01:17:17 -- scripts/common.sh@395 -- # return 1 00:04:21.459 01:17:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:21.459 1+0 records in 00:04:21.459 1+0 records out 00:04:21.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00438734 s, 239 MB/s 00:04:21.459 01:17:17 -- spdk/autotest.sh@105 -- # sync 00:04:21.459 01:17:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:21.459 01:17:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:21.459 01:17:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:23.995 01:17:19 -- spdk/autotest.sh@111 -- # uname -s 00:04:23.995 01:17:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:23.995 01:17:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:23.995 01:17:19 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:24.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.254 Hugepages 00:04:24.254 node hugesize free / total 00:04:24.254 node0 1048576kB 0 / 0 00:04:24.254 node0 2048kB 0 / 0 00:04:24.254 00:04:24.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.254 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:24.254 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:24.513 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:24.513 01:17:20 -- spdk/autotest.sh@117 -- # uname -s 00:04:24.513 01:17:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:24.513 01:17:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:24.513 01:17:20 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.081 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.339 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.339 01:17:21 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:26.275 01:17:22 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:26.275 01:17:22 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:26.275 01:17:22 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:26.275 01:17:22 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:26.275 01:17:22 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:26.275 01:17:22 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:26.275 01:17:22 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.275 01:17:22 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:26.275 01:17:22 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:26.275 01:17:22 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:26.275 01:17:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:26.275 
01:17:22 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.842 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.842 Waiting for block devices as requested 00:04:26.842 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.842 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.842 01:17:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:26.842 01:17:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:26.842 01:17:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:26.842 01:17:22 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:27.101 01:17:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:27.101 01:17:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:27.101 01:17:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:27.101 01:17:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:27.101 01:17:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:27.101 01:17:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:27.101 01:17:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:27.101 01:17:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:27.101 01:17:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:27.101 01:17:22 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:27.101 01:17:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:27.101 01:17:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:27.101 01:17:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:27.101 01:17:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:27.101 01:17:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:27.101 01:17:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:27.101 01:17:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:27.101 01:17:22 -- common/autotest_common.sh@1541 -- # continue 00:04:27.101 01:17:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:27.101 01:17:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:27.101 01:17:22 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:27.101 01:17:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:27.101 01:17:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:27.101 01:17:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:27.101 01:17:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:27.101 01:17:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:27.101 01:17:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:27.101 01:17:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:27.101 01:17:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:27.101 01:17:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:27.101 01:17:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:27.101 01:17:22 -- 
common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:27.101 01:17:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:27.101 01:17:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:27.101 01:17:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:27.101 01:17:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:27.101 01:17:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:27.101 01:17:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:27.101 01:17:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:27.101 01:17:22 -- common/autotest_common.sh@1541 -- # continue 00:04:27.101 01:17:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:27.101 01:17:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.101 01:17:22 -- common/autotest_common.sh@10 -- # set +x 00:04:27.101 01:17:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:27.101 01:17:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.101 01:17:22 -- common/autotest_common.sh@10 -- # set +x 00:04:27.101 01:17:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.927 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.927 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.927 01:17:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:27.927 01:17:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.927 01:17:23 -- common/autotest_common.sh@10 -- # set +x 00:04:27.927 01:17:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:27.927 01:17:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:27.927 01:17:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:27.927 01:17:23 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:27.927 01:17:23 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:27.927 01:17:23 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:27.927 01:17:23 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:27.927 01:17:23 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:27.927 01:17:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:27.927 01:17:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:27.927 01:17:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.927 01:17:23 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:27.927 01:17:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:27.927 01:17:23 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:27.927 01:17:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:27.927 01:17:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:27.927 01:17:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:27.927 01:17:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:27.927 01:17:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.927 01:17:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:27.927 01:17:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:28.186 01:17:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:28.186 
01:17:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:28.186 01:17:23 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:28.186 01:17:23 -- common/autotest_common.sh@1570 -- # return 0 00:04:28.186 01:17:23 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:28.186 01:17:23 -- common/autotest_common.sh@1578 -- # return 0 00:04:28.186 01:17:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:28.186 01:17:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:28.186 01:17:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:28.186 01:17:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:28.186 01:17:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:28.186 01:17:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:28.186 01:17:23 -- common/autotest_common.sh@10 -- # set +x 00:04:28.186 01:17:23 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:28.186 01:17:23 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:28.186 01:17:23 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:28.186 01:17:23 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:28.186 01:17:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.186 01:17:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.186 01:17:23 -- common/autotest_common.sh@10 -- # set +x 00:04:28.186 ************************************ 00:04:28.186 START TEST env 00:04:28.186 ************************************ 00:04:28.186 01:17:23 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:28.186 * Looking for test storage... 00:04:28.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:28.186 01:17:23 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:28.186 01:17:23 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:28.186 01:17:23 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:28.186 01:17:24 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:28.186 01:17:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.186 01:17:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.186 01:17:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.186 01:17:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.186 01:17:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.186 01:17:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.186 01:17:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.186 01:17:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.186 01:17:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.186 01:17:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.186 01:17:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.186 01:17:24 env -- scripts/common.sh@344 -- # case "$op" in 00:04:28.186 01:17:24 env -- scripts/common.sh@345 -- # : 1 00:04:28.186 01:17:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.186 01:17:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.186 01:17:24 env -- scripts/common.sh@365 -- # decimal 1 00:04:28.186 01:17:24 env -- scripts/common.sh@353 -- # local d=1 00:04:28.186 01:17:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.186 01:17:24 env -- scripts/common.sh@355 -- # echo 1 00:04:28.186 01:17:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.186 01:17:24 env -- scripts/common.sh@366 -- # decimal 2 00:04:28.186 01:17:24 env -- scripts/common.sh@353 -- # local d=2 00:04:28.186 01:17:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.186 01:17:24 env -- scripts/common.sh@355 -- # echo 2 00:04:28.186 01:17:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.186 01:17:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.186 01:17:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.186 01:17:24 env -- scripts/common.sh@368 -- # return 0 00:04:28.186 01:17:24 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.186 01:17:24 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.186 --rc genhtml_branch_coverage=1 00:04:28.186 --rc genhtml_function_coverage=1 00:04:28.186 --rc genhtml_legend=1 00:04:28.186 --rc geninfo_all_blocks=1 00:04:28.186 --rc geninfo_unexecuted_blocks=1 00:04:28.186 00:04:28.186 ' 00:04:28.186 01:17:24 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.186 --rc genhtml_branch_coverage=1 00:04:28.186 --rc genhtml_function_coverage=1 00:04:28.186 --rc genhtml_legend=1 00:04:28.186 --rc geninfo_all_blocks=1 00:04:28.186 --rc geninfo_unexecuted_blocks=1 00:04:28.186 00:04:28.186 ' 00:04:28.186 01:17:24 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.186 --rc genhtml_branch_coverage=1 00:04:28.186 --rc genhtml_function_coverage=1 00:04:28.186 --rc genhtml_legend=1 00:04:28.186 --rc geninfo_all_blocks=1 00:04:28.186 --rc geninfo_unexecuted_blocks=1 00:04:28.186 00:04:28.186 ' 00:04:28.186 01:17:24 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.186 --rc genhtml_branch_coverage=1 00:04:28.186 --rc genhtml_function_coverage=1 00:04:28.186 --rc genhtml_legend=1 00:04:28.186 --rc geninfo_all_blocks=1 00:04:28.186 --rc geninfo_unexecuted_blocks=1 00:04:28.186 00:04:28.186 ' 00:04:28.186 01:17:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:28.186 01:17:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.186 01:17:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.186 01:17:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.186 ************************************ 00:04:28.186 START TEST env_memory 00:04:28.186 ************************************ 00:04:28.186 01:17:24 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:28.186 00:04:28.186 00:04:28.186 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.186 http://cunit.sourceforge.net/ 00:04:28.186 00:04:28.186 00:04:28.186 Suite: memory 00:04:28.444 Test: alloc and free memory map ...[2024-09-28 01:17:24.154273] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:28.444 passed 00:04:28.444 Test: mem map translation ...[2024-09-28 01:17:24.215007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:28.444 [2024-09-28 01:17:24.215225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:28.444 [2024-09-28 01:17:24.215494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:28.444 [2024-09-28 01:17:24.215753] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:28.444 passed 00:04:28.444 Test: mem map registration ...[2024-09-28 01:17:24.320458] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:28.444 [2024-09-28 01:17:24.320536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:28.444 passed 00:04:28.703 Test: mem map adjacent registrations ...passed 00:04:28.703 00:04:28.703 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.703 suites 1 1 n/a 0 0 00:04:28.703 tests 4 4 4 0 0 00:04:28.703 asserts 152 152 152 0 n/a 00:04:28.703 00:04:28.703 Elapsed time = 0.353 seconds 00:04:28.703 00:04:28.703 real 0m0.396s 00:04:28.703 user 0m0.365s 00:04:28.703 sys 0m0.021s 00:04:28.703 ************************************ 00:04:28.703 END TEST env_memory 00:04:28.703 ************************************ 00:04:28.703 01:17:24 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.703 01:17:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:28.703 01:17:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:28.703 01:17:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.703 01:17:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.703 01:17:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.703 ************************************ 00:04:28.703 START TEST env_vtophys 00:04:28.703 ************************************ 00:04:28.703 01:17:24 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:28.703 EAL: lib.eal log level changed from notice to debug 00:04:28.703 EAL: Detected lcore 0 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 1 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 2 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 3 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 4 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 5 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 6 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 7 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 8 as core 0 on socket 0 00:04:28.703 EAL: Detected lcore 9 as core 0 on socket 0 00:04:28.703 EAL: Maximum logical cores by configuration: 128 00:04:28.703 EAL: Detected CPU lcores: 10 00:04:28.703 EAL: Detected NUMA nodes: 1 00:04:28.703 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:28.703 EAL: Detected shared linkage of DPDK 00:04:28.703 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:28.703 EAL: Selected IOVA mode 'PA' 00:04:28.703 EAL: Probing VFIO support... 00:04:28.703 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.703 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:28.703 EAL: Ask a virtual area of 0x2e000 bytes 00:04:28.703 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:28.703 EAL: Setting up physically contiguous memory... 00:04:28.703 EAL: Setting maximum number of open files to 524288 00:04:28.703 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:28.703 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:28.703 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.703 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:28.703 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.703 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.703 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:28.703 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:28.703 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.703 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:28.703 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.703 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.703 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:28.703 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:28.703 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.703 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:28.703 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.703 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.703 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:28.703 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:28.703 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.703 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:28.703 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.703 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.703 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:28.703 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:28.703 EAL: Hugepages will be freed exactly as allocated. 00:04:28.703 EAL: No shared files mode enabled, IPC is disabled 00:04:28.703 EAL: No shared files mode enabled, IPC is disabled 00:04:28.975 EAL: TSC frequency is ~2200000 KHz 00:04:28.975 EAL: Main lcore 0 is ready (tid=7f912a6dea40;cpuset=[0]) 00:04:28.975 EAL: Trying to obtain current memory policy. 00:04:28.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.975 EAL: Restoring previous memory policy: 0 00:04:28.975 EAL: request: mp_malloc_sync 00:04:28.975 EAL: No shared files mode enabled, IPC is disabled 00:04:28.975 EAL: Heap on socket 0 was expanded by 2MB 00:04:28.975 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.975 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:28.975 EAL: Mem event callback 'spdk:(nil)' registered 00:04:28.975 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:28.975 00:04:28.975 00:04:28.975 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.975 http://cunit.sourceforge.net/ 00:04:28.975 00:04:28.975 00:04:28.975 Suite: components_suite 00:04:29.245 Test: vtophys_malloc_test ...passed 00:04:29.245 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:29.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.245 EAL: Restoring previous memory policy: 4 00:04:29.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.245 EAL: request: mp_malloc_sync 00:04:29.245 EAL: No shared files mode enabled, IPC is disabled 00:04:29.245 EAL: Heap on socket 0 was expanded by 4MB 00:04:29.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.245 EAL: request: mp_malloc_sync 00:04:29.245 EAL: No shared files mode enabled, IPC is disabled 00:04:29.245 EAL: Heap on socket 0 was shrunk by 4MB 00:04:29.245 EAL: Trying to obtain current memory policy. 00:04:29.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.245 EAL: Restoring previous memory policy: 4 00:04:29.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.245 EAL: request: mp_malloc_sync 00:04:29.245 EAL: No shared files mode enabled, IPC is disabled 00:04:29.245 EAL: Heap on socket 0 was expanded by 6MB 00:04:29.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.245 EAL: request: mp_malloc_sync 00:04:29.245 EAL: No shared files mode enabled, IPC is disabled 00:04:29.245 EAL: Heap on socket 0 was shrunk by 6MB 00:04:29.245 EAL: Trying to obtain current memory policy. 00:04:29.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.246 EAL: Restoring previous memory policy: 4 00:04:29.246 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.246 EAL: request: mp_malloc_sync 00:04:29.246 EAL: No shared files mode enabled, IPC is disabled 00:04:29.246 EAL: Heap on socket 0 was expanded by 10MB 00:04:29.246 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.246 EAL: request: mp_malloc_sync 00:04:29.246 EAL: No shared files mode enabled, IPC is disabled 00:04:29.246 EAL: Heap on socket 0 was shrunk by 10MB 00:04:29.246 EAL: Trying to obtain current memory policy. 00:04:29.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.246 EAL: Restoring previous memory policy: 4 00:04:29.246 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.246 EAL: request: mp_malloc_sync 00:04:29.246 EAL: No shared files mode enabled, IPC is disabled 00:04:29.246 EAL: Heap on socket 0 was expanded by 18MB 00:04:29.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.504 EAL: request: mp_malloc_sync 00:04:29.504 EAL: No shared files mode enabled, IPC is disabled 00:04:29.504 EAL: Heap on socket 0 was shrunk by 18MB 00:04:29.504 EAL: Trying to obtain current memory policy. 00:04:29.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.504 EAL: Restoring previous memory policy: 4 00:04:29.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.504 EAL: request: mp_malloc_sync 00:04:29.504 EAL: No shared files mode enabled, IPC is disabled 00:04:29.504 EAL: Heap on socket 0 was expanded by 34MB 00:04:29.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.504 EAL: request: mp_malloc_sync 00:04:29.504 EAL: No shared files mode enabled, IPC is disabled 00:04:29.504 EAL: Heap on socket 0 was shrunk by 34MB 00:04:29.505 EAL: Trying to obtain current memory policy. 
00:04:29.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.505 EAL: Restoring previous memory policy: 4 00:04:29.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.505 EAL: request: mp_malloc_sync 00:04:29.505 EAL: No shared files mode enabled, IPC is disabled 00:04:29.505 EAL: Heap on socket 0 was expanded by 66MB 00:04:29.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.505 EAL: request: mp_malloc_sync 00:04:29.505 EAL: No shared files mode enabled, IPC is disabled 00:04:29.505 EAL: Heap on socket 0 was shrunk by 66MB 00:04:29.763 EAL: Trying to obtain current memory policy. 00:04:29.763 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.763 EAL: Restoring previous memory policy: 4 00:04:29.763 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.763 EAL: request: mp_malloc_sync 00:04:29.763 EAL: No shared files mode enabled, IPC is disabled 00:04:29.763 EAL: Heap on socket 0 was expanded by 130MB 00:04:29.763 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.763 EAL: request: mp_malloc_sync 00:04:29.763 EAL: No shared files mode enabled, IPC is disabled 00:04:29.763 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.031 EAL: Trying to obtain current memory policy. 00:04:30.031 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.031 EAL: Restoring previous memory policy: 4 00:04:30.031 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.031 EAL: request: mp_malloc_sync 00:04:30.031 EAL: No shared files mode enabled, IPC is disabled 00:04:30.031 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.292 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.292 EAL: request: mp_malloc_sync 00:04:30.292 EAL: No shared files mode enabled, IPC is disabled 00:04:30.292 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.859 EAL: Trying to obtain current memory policy. 00:04:30.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.859 EAL: Restoring previous memory policy: 4 00:04:30.859 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.859 EAL: request: mp_malloc_sync 00:04:30.859 EAL: No shared files mode enabled, IPC is disabled 00:04:30.859 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.428 EAL: request: mp_malloc_sync 00:04:31.428 EAL: No shared files mode enabled, IPC is disabled 00:04:31.428 EAL: Heap on socket 0 was shrunk by 514MB 00:04:31.995 EAL: Trying to obtain current memory policy. 
00:04:31.995 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.254 EAL: Restoring previous memory policy: 4 00:04:32.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.254 EAL: request: mp_malloc_sync 00:04:32.254 EAL: No shared files mode enabled, IPC is disabled 00:04:32.254 EAL: Heap on socket 0 was expanded by 1026MB 00:04:33.631 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.631 EAL: request: mp_malloc_sync 00:04:33.631 EAL: No shared files mode enabled, IPC is disabled 00:04:33.631 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:35.008 passed 00:04:35.008 00:04:35.008 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.008 suites 1 1 n/a 0 0 00:04:35.008 tests 2 2 2 0 0 00:04:35.008 asserts 5649 5649 5649 0 n/a 00:04:35.008 00:04:35.008 Elapsed time = 5.884 seconds 00:04:35.008 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.008 EAL: request: mp_malloc_sync 00:04:35.008 EAL: No shared files mode enabled, IPC is disabled 00:04:35.008 EAL: Heap on socket 0 was shrunk by 2MB 00:04:35.008 EAL: No shared files mode enabled, IPC is disabled 00:04:35.008 EAL: No shared files mode enabled, IPC is disabled 00:04:35.008 EAL: No shared files mode enabled, IPC is disabled 00:04:35.008 00:04:35.008 real 0m6.199s 00:04:35.008 user 0m5.327s 00:04:35.008 sys 0m0.716s 00:04:35.008 01:17:30 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.008 ************************************ 00:04:35.008 END TEST env_vtophys 00:04:35.008 01:17:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:35.008 ************************************ 00:04:35.008 01:17:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:35.008 01:17:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.008 01:17:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.008 01:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.008 ************************************ 00:04:35.008 START TEST env_pci 00:04:35.008 ************************************ 00:04:35.008 01:17:30 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:35.008 00:04:35.008 00:04:35.008 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.008 http://cunit.sourceforge.net/ 00:04:35.008 00:04:35.008 00:04:35.008 Suite: pci 00:04:35.008 Test: pci_hook ...[2024-09-28 01:17:30.818967] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57282 has claimed it 00:04:35.008 passed 00:04:35.008 00:04:35.008 EAL: Cannot find device (10000:00:01.0) 00:04:35.008 EAL: Failed to attach device on primary process 00:04:35.009 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.009 suites 1 1 n/a 0 0 00:04:35.009 tests 1 1 1 0 0 00:04:35.009 asserts 25 25 25 0 n/a 00:04:35.009 00:04:35.009 Elapsed time = 0.007 seconds 00:04:35.009 00:04:35.009 real 0m0.081s 00:04:35.009 user 0m0.038s 00:04:35.009 sys 0m0.042s 00:04:35.009 01:17:30 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.009 01:17:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:35.009 ************************************ 00:04:35.009 END TEST env_pci 00:04:35.009 ************************************ 00:04:35.009 01:17:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:35.009 01:17:30 env -- env/env.sh@15 -- # uname 00:04:35.009 01:17:30 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:35.009 01:17:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:35.009 01:17:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.009 01:17:30 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:35.009 01:17:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.009 01:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.009 ************************************ 00:04:35.009 START TEST env_dpdk_post_init 00:04:35.009 ************************************ 00:04:35.009 01:17:30 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.267 EAL: Detected CPU lcores: 10 00:04:35.267 EAL: Detected NUMA nodes: 1 00:04:35.267 EAL: Detected shared linkage of DPDK 00:04:35.267 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.267 EAL: Selected IOVA mode 'PA' 00:04:35.267 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.267 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:35.267 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:35.267 Starting DPDK initialization... 00:04:35.267 Starting SPDK post initialization... 00:04:35.267 SPDK NVMe probe 00:04:35.267 Attaching to 0000:00:10.0 00:04:35.267 Attaching to 0000:00:11.0 00:04:35.267 Attached to 0000:00:10.0 00:04:35.267 Attached to 0000:00:11.0 00:04:35.267 Cleaning up... 00:04:35.526 00:04:35.526 real 0m0.280s 00:04:35.526 user 0m0.076s 00:04:35.526 sys 0m0.103s 00:04:35.526 01:17:31 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.526 ************************************ 00:04:35.526 END TEST env_dpdk_post_init 00:04:35.526 ************************************ 00:04:35.526 01:17:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.526 01:17:31 env -- env/env.sh@26 -- # uname 00:04:35.526 01:17:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:35.526 01:17:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.526 01:17:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.526 01:17:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.526 01:17:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.526 ************************************ 00:04:35.526 START TEST env_mem_callbacks 00:04:35.526 ************************************ 00:04:35.526 01:17:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.526 EAL: Detected CPU lcores: 10 00:04:35.526 EAL: Detected NUMA nodes: 1 00:04:35.526 EAL: Detected shared linkage of DPDK 00:04:35.526 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.526 EAL: Selected IOVA mode 'PA' 00:04:35.526 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.526 00:04:35.526 00:04:35.526 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.526 http://cunit.sourceforge.net/ 00:04:35.526 00:04:35.526 00:04:35.526 Suite: memory 00:04:35.526 Test: test ... 
00:04:35.526 register 0x200000200000 2097152 00:04:35.526 malloc 3145728 00:04:35.526 register 0x200000400000 4194304 00:04:35.526 buf 0x2000004fffc0 len 3145728 PASSED 00:04:35.526 malloc 64 00:04:35.526 buf 0x2000004ffec0 len 64 PASSED 00:04:35.526 malloc 4194304 00:04:35.526 register 0x200000800000 6291456 00:04:35.526 buf 0x2000009fffc0 len 4194304 PASSED 00:04:35.526 free 0x2000004fffc0 3145728 00:04:35.526 free 0x2000004ffec0 64 00:04:35.785 unregister 0x200000400000 4194304 PASSED 00:04:35.785 free 0x2000009fffc0 4194304 00:04:35.785 unregister 0x200000800000 6291456 PASSED 00:04:35.785 malloc 8388608 00:04:35.785 register 0x200000400000 10485760 00:04:35.785 buf 0x2000005fffc0 len 8388608 PASSED 00:04:35.785 free 0x2000005fffc0 8388608 00:04:35.785 unregister 0x200000400000 10485760 PASSED 00:04:35.785 passed 00:04:35.785 00:04:35.785 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.785 suites 1 1 n/a 0 0 00:04:35.785 tests 1 1 1 0 0 00:04:35.785 asserts 15 15 15 0 n/a 00:04:35.785 00:04:35.785 Elapsed time = 0.053 seconds 00:04:35.785 00:04:35.785 real 0m0.254s 00:04:35.785 user 0m0.086s 00:04:35.786 sys 0m0.066s 00:04:35.786 01:17:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.786 ************************************ 00:04:35.786 END TEST env_mem_callbacks 00:04:35.786 ************************************ 00:04:35.786 01:17:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:35.786 00:04:35.786 real 0m7.675s 00:04:35.786 user 0m6.096s 00:04:35.786 sys 0m1.189s 00:04:35.786 01:17:31 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.786 ************************************ 00:04:35.786 END TEST env 00:04:35.786 01:17:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.786 ************************************ 00:04:35.786 01:17:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:35.786 01:17:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.786 01:17:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.786 01:17:31 -- common/autotest_common.sh@10 -- # set +x 00:04:35.786 ************************************ 00:04:35.786 START TEST rpc 00:04:35.786 ************************************ 00:04:35.786 01:17:31 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:35.786 * Looking for test storage... 
00:04:35.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.786 01:17:31 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.786 01:17:31 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.786 01:17:31 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:36.045 01:17:31 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:36.045 01:17:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.045 01:17:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.045 01:17:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.045 01:17:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.045 01:17:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.045 01:17:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.045 01:17:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.045 01:17:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.045 01:17:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.045 01:17:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.045 01:17:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.045 01:17:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.045 01:17:31 rpc -- scripts/common.sh@345 -- # : 1 00:04:36.045 01:17:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.045 01:17:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.045 01:17:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.045 01:17:31 rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.045 01:17:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.045 01:17:31 rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.045 01:17:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.045 01:17:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.045 01:17:31 rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.045 01:17:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.045 01:17:31 rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.045 01:17:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.045 01:17:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.045 01:17:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.045 01:17:31 rpc -- scripts/common.sh@368 -- # return 0 00:04:36.045 01:17:31 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.045 01:17:31 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.045 --rc genhtml_branch_coverage=1 00:04:36.045 --rc genhtml_function_coverage=1 00:04:36.045 --rc genhtml_legend=1 00:04:36.045 --rc geninfo_all_blocks=1 00:04:36.045 --rc geninfo_unexecuted_blocks=1 00:04:36.045 00:04:36.045 ' 00:04:36.045 01:17:31 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.046 --rc genhtml_branch_coverage=1 00:04:36.046 --rc genhtml_function_coverage=1 00:04:36.046 --rc genhtml_legend=1 00:04:36.046 --rc geninfo_all_blocks=1 00:04:36.046 --rc geninfo_unexecuted_blocks=1 00:04:36.046 00:04:36.046 ' 00:04:36.046 01:17:31 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:36.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.046 --rc genhtml_branch_coverage=1 00:04:36.046 --rc genhtml_function_coverage=1 00:04:36.046 --rc 
genhtml_legend=1 00:04:36.046 --rc geninfo_all_blocks=1 00:04:36.046 --rc geninfo_unexecuted_blocks=1 00:04:36.046 00:04:36.046 ' 00:04:36.046 01:17:31 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:36.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.046 --rc genhtml_branch_coverage=1 00:04:36.046 --rc genhtml_function_coverage=1 00:04:36.046 --rc genhtml_legend=1 00:04:36.046 --rc geninfo_all_blocks=1 00:04:36.046 --rc geninfo_unexecuted_blocks=1 00:04:36.046 00:04:36.046 ' 00:04:36.046 01:17:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57409 00:04:36.046 01:17:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.046 01:17:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:36.046 01:17:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57409 00:04:36.046 01:17:31 rpc -- common/autotest_common.sh@831 -- # '[' -z 57409 ']' 00:04:36.046 01:17:31 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.046 01:17:31 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.046 01:17:31 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.046 01:17:31 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.046 01:17:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.046 [2024-09-28 01:17:31.928376] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:36.046 [2024-09-28 01:17:31.928582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57409 ] 00:04:36.305 [2024-09-28 01:17:32.102424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.564 [2024-09-28 01:17:32.272599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.564 [2024-09-28 01:17:32.272692] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57409' to capture a snapshot of events at runtime. 00:04:36.564 [2024-09-28 01:17:32.272708] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.564 [2024-09-28 01:17:32.272720] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.564 [2024-09-28 01:17:32.272730] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57409 for offline analysis/debug. 
00:04:36.564 [2024-09-28 01:17:32.272795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.564 [2024-09-28 01:17:32.459584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:37.132 01:17:32 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.132 01:17:32 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:37.132 01:17:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.132 01:17:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.132 01:17:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.132 01:17:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.132 01:17:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.132 01:17:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.132 01:17:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.132 ************************************ 00:04:37.132 START TEST rpc_integrity 00:04:37.132 ************************************ 00:04:37.132 01:17:32 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:37.132 01:17:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.132 01:17:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.132 01:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.132 01:17:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.132 01:17:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.132 01:17:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.132 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.132 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.132 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.132 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.132 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.132 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.132 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.132 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.132 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.132 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.132 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.132 { 00:04:37.132 "name": "Malloc0", 00:04:37.132 "aliases": [ 00:04:37.132 "4b2ae599-2e2b-4ff0-b8c4-84957dd498ee" 00:04:37.132 ], 00:04:37.132 "product_name": "Malloc disk", 00:04:37.132 "block_size": 512, 00:04:37.132 "num_blocks": 16384, 00:04:37.132 "uuid": "4b2ae599-2e2b-4ff0-b8c4-84957dd498ee", 00:04:37.132 "assigned_rate_limits": { 00:04:37.132 "rw_ios_per_sec": 0, 00:04:37.132 "rw_mbytes_per_sec": 0, 00:04:37.132 "r_mbytes_per_sec": 0, 00:04:37.132 "w_mbytes_per_sec": 0 00:04:37.132 }, 00:04:37.132 "claimed": false, 00:04:37.132 "zoned": false, 00:04:37.132 
"supported_io_types": { 00:04:37.132 "read": true, 00:04:37.132 "write": true, 00:04:37.132 "unmap": true, 00:04:37.132 "flush": true, 00:04:37.132 "reset": true, 00:04:37.132 "nvme_admin": false, 00:04:37.132 "nvme_io": false, 00:04:37.132 "nvme_io_md": false, 00:04:37.132 "write_zeroes": true, 00:04:37.132 "zcopy": true, 00:04:37.132 "get_zone_info": false, 00:04:37.132 "zone_management": false, 00:04:37.132 "zone_append": false, 00:04:37.132 "compare": false, 00:04:37.132 "compare_and_write": false, 00:04:37.132 "abort": true, 00:04:37.132 "seek_hole": false, 00:04:37.132 "seek_data": false, 00:04:37.132 "copy": true, 00:04:37.132 "nvme_iov_md": false 00:04:37.132 }, 00:04:37.132 "memory_domains": [ 00:04:37.132 { 00:04:37.132 "dma_device_id": "system", 00:04:37.132 "dma_device_type": 1 00:04:37.132 }, 00:04:37.132 { 00:04:37.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.132 "dma_device_type": 2 00:04:37.132 } 00:04:37.132 ], 00:04:37.132 "driver_specific": {} 00:04:37.132 } 00:04:37.132 ]' 00:04:37.133 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.392 [2024-09-28 01:17:33.107030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.392 [2024-09-28 01:17:33.107124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.392 [2024-09-28 01:17:33.107179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:04:37.392 [2024-09-28 01:17:33.107199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.392 [2024-09-28 01:17:33.110271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.392 [2024-09-28 01:17:33.110343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.392 Passthru0 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.392 { 00:04:37.392 "name": "Malloc0", 00:04:37.392 "aliases": [ 00:04:37.392 "4b2ae599-2e2b-4ff0-b8c4-84957dd498ee" 00:04:37.392 ], 00:04:37.392 "product_name": "Malloc disk", 00:04:37.392 "block_size": 512, 00:04:37.392 "num_blocks": 16384, 00:04:37.392 "uuid": "4b2ae599-2e2b-4ff0-b8c4-84957dd498ee", 00:04:37.392 "assigned_rate_limits": { 00:04:37.392 "rw_ios_per_sec": 0, 00:04:37.392 "rw_mbytes_per_sec": 0, 00:04:37.392 "r_mbytes_per_sec": 0, 00:04:37.392 "w_mbytes_per_sec": 0 00:04:37.392 }, 00:04:37.392 "claimed": true, 00:04:37.392 "claim_type": "exclusive_write", 00:04:37.392 "zoned": false, 00:04:37.392 "supported_io_types": { 00:04:37.392 "read": true, 00:04:37.392 "write": true, 00:04:37.392 "unmap": true, 00:04:37.392 "flush": true, 00:04:37.392 "reset": true, 00:04:37.392 "nvme_admin": false, 
00:04:37.392 "nvme_io": false, 00:04:37.392 "nvme_io_md": false, 00:04:37.392 "write_zeroes": true, 00:04:37.392 "zcopy": true, 00:04:37.392 "get_zone_info": false, 00:04:37.392 "zone_management": false, 00:04:37.392 "zone_append": false, 00:04:37.392 "compare": false, 00:04:37.392 "compare_and_write": false, 00:04:37.392 "abort": true, 00:04:37.392 "seek_hole": false, 00:04:37.392 "seek_data": false, 00:04:37.392 "copy": true, 00:04:37.392 "nvme_iov_md": false 00:04:37.392 }, 00:04:37.392 "memory_domains": [ 00:04:37.392 { 00:04:37.392 "dma_device_id": "system", 00:04:37.392 "dma_device_type": 1 00:04:37.392 }, 00:04:37.392 { 00:04:37.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.392 "dma_device_type": 2 00:04:37.392 } 00:04:37.392 ], 00:04:37.392 "driver_specific": {} 00:04:37.392 }, 00:04:37.392 { 00:04:37.392 "name": "Passthru0", 00:04:37.392 "aliases": [ 00:04:37.392 "143a17dd-fbd2-5c0d-85c9-71abdbf0d0ed" 00:04:37.392 ], 00:04:37.392 "product_name": "passthru", 00:04:37.392 "block_size": 512, 00:04:37.392 "num_blocks": 16384, 00:04:37.392 "uuid": "143a17dd-fbd2-5c0d-85c9-71abdbf0d0ed", 00:04:37.392 "assigned_rate_limits": { 00:04:37.392 "rw_ios_per_sec": 0, 00:04:37.392 "rw_mbytes_per_sec": 0, 00:04:37.392 "r_mbytes_per_sec": 0, 00:04:37.392 "w_mbytes_per_sec": 0 00:04:37.392 }, 00:04:37.392 "claimed": false, 00:04:37.392 "zoned": false, 00:04:37.392 "supported_io_types": { 00:04:37.392 "read": true, 00:04:37.392 "write": true, 00:04:37.392 "unmap": true, 00:04:37.392 "flush": true, 00:04:37.392 "reset": true, 00:04:37.392 "nvme_admin": false, 00:04:37.392 "nvme_io": false, 00:04:37.392 "nvme_io_md": false, 00:04:37.392 "write_zeroes": true, 00:04:37.392 "zcopy": true, 00:04:37.392 "get_zone_info": false, 00:04:37.392 "zone_management": false, 00:04:37.392 "zone_append": false, 00:04:37.392 "compare": false, 00:04:37.392 "compare_and_write": false, 00:04:37.392 "abort": true, 00:04:37.392 "seek_hole": false, 00:04:37.392 "seek_data": false, 00:04:37.392 "copy": true, 00:04:37.392 "nvme_iov_md": false 00:04:37.392 }, 00:04:37.392 "memory_domains": [ 00:04:37.392 { 00:04:37.392 "dma_device_id": "system", 00:04:37.392 "dma_device_type": 1 00:04:37.392 }, 00:04:37.392 { 00:04:37.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.392 "dma_device_type": 2 00:04:37.392 } 00:04:37.392 ], 00:04:37.392 "driver_specific": { 00:04:37.392 "passthru": { 00:04:37.392 "name": "Passthru0", 00:04:37.392 "base_bdev_name": "Malloc0" 00:04:37.392 } 00:04:37.392 } 00:04:37.392 } 00:04:37.392 ]' 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.392 01:17:33 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.392 01:17:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.392 00:04:37.392 real 0m0.337s 00:04:37.392 user 0m0.209s 00:04:37.392 sys 0m0.040s 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.392 ************************************ 00:04:37.392 END TEST rpc_integrity 00:04:37.392 01:17:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.392 ************************************ 00:04:37.651 01:17:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.651 01:17:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.651 01:17:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.651 01:17:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.651 ************************************ 00:04:37.651 START TEST rpc_plugins 00:04:37.651 ************************************ 00:04:37.651 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:37.651 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.651 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.651 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.651 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.651 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.651 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.651 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.651 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.651 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.651 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.651 { 00:04:37.651 "name": "Malloc1", 00:04:37.651 "aliases": [ 00:04:37.651 "aa7d5af5-c337-44f3-8651-12f5f37936c8" 00:04:37.651 ], 00:04:37.651 "product_name": "Malloc disk", 00:04:37.651 "block_size": 4096, 00:04:37.651 "num_blocks": 256, 00:04:37.651 "uuid": "aa7d5af5-c337-44f3-8651-12f5f37936c8", 00:04:37.651 "assigned_rate_limits": { 00:04:37.651 "rw_ios_per_sec": 0, 00:04:37.651 "rw_mbytes_per_sec": 0, 00:04:37.651 "r_mbytes_per_sec": 0, 00:04:37.651 "w_mbytes_per_sec": 0 00:04:37.651 }, 00:04:37.651 "claimed": false, 00:04:37.651 "zoned": false, 00:04:37.651 "supported_io_types": { 00:04:37.651 "read": true, 00:04:37.651 "write": true, 00:04:37.652 "unmap": true, 00:04:37.652 "flush": true, 00:04:37.652 "reset": true, 00:04:37.652 "nvme_admin": false, 00:04:37.652 "nvme_io": false, 00:04:37.652 "nvme_io_md": false, 00:04:37.652 "write_zeroes": true, 00:04:37.652 "zcopy": true, 00:04:37.652 "get_zone_info": false, 00:04:37.652 "zone_management": false, 00:04:37.652 "zone_append": false, 00:04:37.652 "compare": false, 00:04:37.652 "compare_and_write": false, 00:04:37.652 "abort": true, 00:04:37.652 "seek_hole": false, 00:04:37.652 "seek_data": false, 00:04:37.652 "copy": true, 00:04:37.652 "nvme_iov_md": false 00:04:37.652 }, 00:04:37.652 "memory_domains": [ 00:04:37.652 { 
00:04:37.652 "dma_device_id": "system", 00:04:37.652 "dma_device_type": 1 00:04:37.652 }, 00:04:37.652 { 00:04:37.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.652 "dma_device_type": 2 00:04:37.652 } 00:04:37.652 ], 00:04:37.652 "driver_specific": {} 00:04:37.652 } 00:04:37.652 ]' 00:04:37.652 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.652 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.652 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.652 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.652 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.652 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.652 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.652 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.652 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.652 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.652 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.652 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.652 01:17:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.652 00:04:37.652 real 0m0.146s 00:04:37.652 user 0m0.092s 00:04:37.652 sys 0m0.016s 00:04:37.652 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.652 01:17:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.652 ************************************ 00:04:37.652 END TEST rpc_plugins 00:04:37.652 ************************************ 00:04:37.652 01:17:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.652 01:17:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.652 01:17:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.652 01:17:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.652 ************************************ 00:04:37.652 START TEST rpc_trace_cmd_test 00:04:37.652 ************************************ 00:04:37.652 01:17:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:37.652 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:37.652 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.652 01:17:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.652 01:17:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.652 01:17:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.652 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:37.652 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57409", 00:04:37.652 "tpoint_group_mask": "0x8", 00:04:37.652 "iscsi_conn": { 00:04:37.652 "mask": "0x2", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "scsi": { 00:04:37.652 "mask": "0x4", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "bdev": { 00:04:37.652 "mask": "0x8", 00:04:37.652 "tpoint_mask": "0xffffffffffffffff" 00:04:37.652 }, 00:04:37.652 "nvmf_rdma": { 00:04:37.652 "mask": "0x10", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "nvmf_tcp": { 00:04:37.652 "mask": "0x20", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "ftl": { 00:04:37.652 
"mask": "0x40", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "blobfs": { 00:04:37.652 "mask": "0x80", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "dsa": { 00:04:37.652 "mask": "0x200", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "thread": { 00:04:37.652 "mask": "0x400", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "nvme_pcie": { 00:04:37.652 "mask": "0x800", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "iaa": { 00:04:37.652 "mask": "0x1000", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "nvme_tcp": { 00:04:37.652 "mask": "0x2000", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "bdev_nvme": { 00:04:37.652 "mask": "0x4000", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "sock": { 00:04:37.652 "mask": "0x8000", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "blob": { 00:04:37.652 "mask": "0x10000", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 }, 00:04:37.652 "bdev_raid": { 00:04:37.652 "mask": "0x20000", 00:04:37.652 "tpoint_mask": "0x0" 00:04:37.652 } 00:04:37.652 }' 00:04:37.652 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:37.911 00:04:37.911 real 0m0.262s 00:04:37.911 user 0m0.229s 00:04:37.911 sys 0m0.025s 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.911 01:17:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.911 ************************************ 00:04:37.911 END TEST rpc_trace_cmd_test 00:04:37.911 ************************************ 00:04:37.911 01:17:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.911 01:17:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.911 01:17:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.911 01:17:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.911 01:17:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.911 01:17:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.171 ************************************ 00:04:38.171 START TEST rpc_daemon_integrity 00:04:38.171 ************************************ 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.171 01:17:33 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.171 { 00:04:38.171 "name": "Malloc2", 00:04:38.171 "aliases": [ 00:04:38.171 "dac6df24-6c96-47f6-b364-914d8881c2c3" 00:04:38.171 ], 00:04:38.171 "product_name": "Malloc disk", 00:04:38.171 "block_size": 512, 00:04:38.171 "num_blocks": 16384, 00:04:38.171 "uuid": "dac6df24-6c96-47f6-b364-914d8881c2c3", 00:04:38.171 "assigned_rate_limits": { 00:04:38.171 "rw_ios_per_sec": 0, 00:04:38.171 "rw_mbytes_per_sec": 0, 00:04:38.171 "r_mbytes_per_sec": 0, 00:04:38.171 "w_mbytes_per_sec": 0 00:04:38.171 }, 00:04:38.171 "claimed": false, 00:04:38.171 "zoned": false, 00:04:38.171 "supported_io_types": { 00:04:38.171 "read": true, 00:04:38.171 "write": true, 00:04:38.171 "unmap": true, 00:04:38.171 "flush": true, 00:04:38.171 "reset": true, 00:04:38.171 "nvme_admin": false, 00:04:38.171 "nvme_io": false, 00:04:38.171 "nvme_io_md": false, 00:04:38.171 "write_zeroes": true, 00:04:38.171 "zcopy": true, 00:04:38.171 "get_zone_info": false, 00:04:38.171 "zone_management": false, 00:04:38.171 "zone_append": false, 00:04:38.171 "compare": false, 00:04:38.171 "compare_and_write": false, 00:04:38.171 "abort": true, 00:04:38.171 "seek_hole": false, 00:04:38.171 "seek_data": false, 00:04:38.171 "copy": true, 00:04:38.171 "nvme_iov_md": false 00:04:38.171 }, 00:04:38.171 "memory_domains": [ 00:04:38.171 { 00:04:38.171 "dma_device_id": "system", 00:04:38.171 "dma_device_type": 1 00:04:38.171 }, 00:04:38.171 { 00:04:38.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.171 "dma_device_type": 2 00:04:38.171 } 00:04:38.171 ], 00:04:38.171 "driver_specific": {} 00:04:38.171 } 00:04:38.171 ]' 00:04:38.171 01:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.171 [2024-09-28 01:17:34.010946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.171 [2024-09-28 01:17:34.011057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.171 [2024-09-28 01:17:34.011114] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008d80 00:04:38.171 [2024-09-28 01:17:34.011130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.171 [2024-09-28 01:17:34.014263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.171 [2024-09-28 01:17:34.014320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.171 Passthru0 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.171 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.171 { 00:04:38.171 "name": "Malloc2", 00:04:38.171 "aliases": [ 00:04:38.171 "dac6df24-6c96-47f6-b364-914d8881c2c3" 00:04:38.171 ], 00:04:38.171 "product_name": "Malloc disk", 00:04:38.171 "block_size": 512, 00:04:38.171 "num_blocks": 16384, 00:04:38.171 "uuid": "dac6df24-6c96-47f6-b364-914d8881c2c3", 00:04:38.171 "assigned_rate_limits": { 00:04:38.171 "rw_ios_per_sec": 0, 00:04:38.171 "rw_mbytes_per_sec": 0, 00:04:38.171 "r_mbytes_per_sec": 0, 00:04:38.171 "w_mbytes_per_sec": 0 00:04:38.171 }, 00:04:38.171 "claimed": true, 00:04:38.171 "claim_type": "exclusive_write", 00:04:38.171 "zoned": false, 00:04:38.171 "supported_io_types": { 00:04:38.171 "read": true, 00:04:38.171 "write": true, 00:04:38.171 "unmap": true, 00:04:38.171 "flush": true, 00:04:38.171 "reset": true, 00:04:38.171 "nvme_admin": false, 00:04:38.171 "nvme_io": false, 00:04:38.171 "nvme_io_md": false, 00:04:38.171 "write_zeroes": true, 00:04:38.171 "zcopy": true, 00:04:38.171 "get_zone_info": false, 00:04:38.171 "zone_management": false, 00:04:38.171 "zone_append": false, 00:04:38.171 "compare": false, 00:04:38.171 "compare_and_write": false, 00:04:38.171 "abort": true, 00:04:38.171 "seek_hole": false, 00:04:38.171 "seek_data": false, 00:04:38.171 "copy": true, 00:04:38.171 "nvme_iov_md": false 00:04:38.171 }, 00:04:38.171 "memory_domains": [ 00:04:38.171 { 00:04:38.171 "dma_device_id": "system", 00:04:38.171 "dma_device_type": 1 00:04:38.171 }, 00:04:38.171 { 00:04:38.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.171 "dma_device_type": 2 00:04:38.171 } 00:04:38.171 ], 00:04:38.171 "driver_specific": {} 00:04:38.171 }, 00:04:38.171 { 00:04:38.171 "name": "Passthru0", 00:04:38.171 "aliases": [ 00:04:38.171 "ca05ad6a-dc46-54c0-8b87-0a211810e690" 00:04:38.171 ], 00:04:38.171 "product_name": "passthru", 00:04:38.172 "block_size": 512, 00:04:38.172 "num_blocks": 16384, 00:04:38.172 "uuid": "ca05ad6a-dc46-54c0-8b87-0a211810e690", 00:04:38.172 "assigned_rate_limits": { 00:04:38.172 "rw_ios_per_sec": 0, 00:04:38.172 "rw_mbytes_per_sec": 0, 00:04:38.172 "r_mbytes_per_sec": 0, 00:04:38.172 "w_mbytes_per_sec": 0 00:04:38.172 }, 00:04:38.172 "claimed": false, 00:04:38.172 "zoned": false, 00:04:38.172 "supported_io_types": { 00:04:38.172 "read": true, 00:04:38.172 "write": true, 00:04:38.172 "unmap": true, 00:04:38.172 "flush": true, 00:04:38.172 "reset": true, 00:04:38.172 "nvme_admin": false, 00:04:38.172 "nvme_io": false, 00:04:38.172 "nvme_io_md": false, 00:04:38.172 "write_zeroes": true, 00:04:38.172 "zcopy": true, 00:04:38.172 "get_zone_info": 
false, 00:04:38.172 "zone_management": false, 00:04:38.172 "zone_append": false, 00:04:38.172 "compare": false, 00:04:38.172 "compare_and_write": false, 00:04:38.172 "abort": true, 00:04:38.172 "seek_hole": false, 00:04:38.172 "seek_data": false, 00:04:38.172 "copy": true, 00:04:38.172 "nvme_iov_md": false 00:04:38.172 }, 00:04:38.172 "memory_domains": [ 00:04:38.172 { 00:04:38.172 "dma_device_id": "system", 00:04:38.172 "dma_device_type": 1 00:04:38.172 }, 00:04:38.172 { 00:04:38.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.172 "dma_device_type": 2 00:04:38.172 } 00:04:38.172 ], 00:04:38.172 "driver_specific": { 00:04:38.172 "passthru": { 00:04:38.172 "name": "Passthru0", 00:04:38.172 "base_bdev_name": "Malloc2" 00:04:38.172 } 00:04:38.172 } 00:04:38.172 } 00:04:38.172 ]' 00:04:38.172 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.431 00:04:38.431 real 0m0.346s 00:04:38.431 user 0m0.210s 00:04:38.431 sys 0m0.050s 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.431 ************************************ 00:04:38.431 END TEST rpc_daemon_integrity 00:04:38.431 01:17:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.431 ************************************ 00:04:38.431 01:17:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.431 01:17:34 rpc -- rpc/rpc.sh@84 -- # killprocess 57409 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@950 -- # '[' -z 57409 ']' 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@954 -- # kill -0 57409 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@955 -- # uname 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57409 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.431 killing process 
with pid 57409 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57409' 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@969 -- # kill 57409 00:04:38.431 01:17:34 rpc -- common/autotest_common.sh@974 -- # wait 57409 00:04:40.335 00:04:40.335 real 0m4.581s 00:04:40.335 user 0m5.338s 00:04:40.335 sys 0m0.772s 00:04:40.335 01:17:36 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.335 01:17:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.335 ************************************ 00:04:40.335 END TEST rpc 00:04:40.335 ************************************ 00:04:40.335 01:17:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.335 01:17:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.335 01:17:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.335 01:17:36 -- common/autotest_common.sh@10 -- # set +x 00:04:40.335 ************************************ 00:04:40.335 START TEST skip_rpc 00:04:40.335 ************************************ 00:04:40.335 01:17:36 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.595 * Looking for test storage... 00:04:40.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.595 01:17:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.595 --rc genhtml_branch_coverage=1 00:04:40.595 --rc genhtml_function_coverage=1 00:04:40.595 --rc genhtml_legend=1 00:04:40.595 --rc geninfo_all_blocks=1 00:04:40.595 --rc geninfo_unexecuted_blocks=1 00:04:40.595 00:04:40.595 ' 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.595 --rc genhtml_branch_coverage=1 00:04:40.595 --rc genhtml_function_coverage=1 00:04:40.595 --rc genhtml_legend=1 00:04:40.595 --rc geninfo_all_blocks=1 00:04:40.595 --rc geninfo_unexecuted_blocks=1 00:04:40.595 00:04:40.595 ' 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.595 --rc genhtml_branch_coverage=1 00:04:40.595 --rc genhtml_function_coverage=1 00:04:40.595 --rc genhtml_legend=1 00:04:40.595 --rc geninfo_all_blocks=1 00:04:40.595 --rc geninfo_unexecuted_blocks=1 00:04:40.595 00:04:40.595 ' 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.595 --rc genhtml_branch_coverage=1 00:04:40.595 --rc genhtml_function_coverage=1 00:04:40.595 --rc genhtml_legend=1 00:04:40.595 --rc geninfo_all_blocks=1 00:04:40.595 --rc geninfo_unexecuted_blocks=1 00:04:40.595 00:04:40.595 ' 00:04:40.595 01:17:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.595 01:17:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.595 01:17:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.595 01:17:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.595 ************************************ 00:04:40.595 START TEST skip_rpc 00:04:40.595 ************************************ 00:04:40.595 01:17:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:40.595 01:17:36 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57633 00:04:40.595 01:17:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.595 01:17:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.595 01:17:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.854 [2024-09-28 01:17:36.562440] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:40.854 [2024-09-28 01:17:36.562635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57633 ] 00:04:40.854 [2024-09-28 01:17:36.735428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.113 [2024-09-28 01:17:36.903226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.372 [2024-09-28 01:17:37.092399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57633 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57633 ']' 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57633 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57633 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.565 killing process with pid 57633 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 57633' 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57633 00:04:45.565 01:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57633 00:04:47.496 00:04:47.496 real 0m6.871s 00:04:47.496 user 0m6.434s 00:04:47.496 sys 0m0.339s 00:04:47.496 01:17:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.496 01:17:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.496 ************************************ 00:04:47.496 END TEST skip_rpc 00:04:47.496 ************************************ 00:04:47.496 01:17:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:47.496 01:17:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.496 01:17:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.496 01:17:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.496 ************************************ 00:04:47.496 START TEST skip_rpc_with_json 00:04:47.496 ************************************ 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57731 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57731 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57731 ']' 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.496 01:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.755 [2024-09-28 01:17:43.487045] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:47.755 [2024-09-28 01:17:43.487246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57731 ] 00:04:47.755 [2024-09-28 01:17:43.655887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.014 [2024-09-28 01:17:43.804960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.273 [2024-09-28 01:17:43.989068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.532 [2024-09-28 01:17:44.451496] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:48.532 request: 00:04:48.532 { 00:04:48.532 "trtype": "tcp", 00:04:48.532 "method": "nvmf_get_transports", 00:04:48.532 "req_id": 1 00:04:48.532 } 00:04:48.532 Got JSON-RPC error response 00:04:48.532 response: 00:04:48.532 { 00:04:48.532 "code": -19, 00:04:48.532 "message": "No such device" 00:04:48.532 } 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.532 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 [2024-09-28 01:17:44.463843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.791 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.791 01:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:48.791 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.791 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.791 01:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.791 { 00:04:48.791 "subsystems": [ 00:04:48.791 { 00:04:48.791 "subsystem": "fsdev", 00:04:48.791 "config": [ 00:04:48.791 { 00:04:48.791 "method": "fsdev_set_opts", 00:04:48.791 "params": { 00:04:48.791 "fsdev_io_pool_size": 65535, 00:04:48.791 "fsdev_io_cache_size": 256 00:04:48.791 } 00:04:48.791 } 00:04:48.791 ] 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "subsystem": "vfio_user_target", 00:04:48.791 "config": null 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "subsystem": "keyring", 00:04:48.791 "config": [] 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "subsystem": "iobuf", 00:04:48.791 "config": [ 00:04:48.791 { 00:04:48.791 "method": "iobuf_set_options", 00:04:48.791 "params": { 00:04:48.791 "small_pool_count": 8192, 00:04:48.791 "large_pool_count": 1024, 00:04:48.791 
"small_bufsize": 8192, 00:04:48.791 "large_bufsize": 135168 00:04:48.791 } 00:04:48.791 } 00:04:48.791 ] 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "subsystem": "sock", 00:04:48.791 "config": [ 00:04:48.791 { 00:04:48.791 "method": "sock_set_default_impl", 00:04:48.791 "params": { 00:04:48.791 "impl_name": "uring" 00:04:48.791 } 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "method": "sock_impl_set_options", 00:04:48.791 "params": { 00:04:48.791 "impl_name": "ssl", 00:04:48.791 "recv_buf_size": 4096, 00:04:48.791 "send_buf_size": 4096, 00:04:48.791 "enable_recv_pipe": true, 00:04:48.791 "enable_quickack": false, 00:04:48.791 "enable_placement_id": 0, 00:04:48.792 "enable_zerocopy_send_server": true, 00:04:48.792 "enable_zerocopy_send_client": false, 00:04:48.792 "zerocopy_threshold": 0, 00:04:48.792 "tls_version": 0, 00:04:48.792 "enable_ktls": false 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "sock_impl_set_options", 00:04:48.792 "params": { 00:04:48.792 "impl_name": "posix", 00:04:48.792 "recv_buf_size": 2097152, 00:04:48.792 "send_buf_size": 2097152, 00:04:48.792 "enable_recv_pipe": true, 00:04:48.792 "enable_quickack": false, 00:04:48.792 "enable_placement_id": 0, 00:04:48.792 "enable_zerocopy_send_server": true, 00:04:48.792 "enable_zerocopy_send_client": false, 00:04:48.792 "zerocopy_threshold": 0, 00:04:48.792 "tls_version": 0, 00:04:48.792 "enable_ktls": false 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "sock_impl_set_options", 00:04:48.792 "params": { 00:04:48.792 "impl_name": "uring", 00:04:48.792 "recv_buf_size": 2097152, 00:04:48.792 "send_buf_size": 2097152, 00:04:48.792 "enable_recv_pipe": true, 00:04:48.792 "enable_quickack": false, 00:04:48.792 "enable_placement_id": 0, 00:04:48.792 "enable_zerocopy_send_server": false, 00:04:48.792 "enable_zerocopy_send_client": false, 00:04:48.792 "zerocopy_threshold": 0, 00:04:48.792 "tls_version": 0, 00:04:48.792 "enable_ktls": false 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "vmd", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "accel", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "accel_set_options", 00:04:48.792 "params": { 00:04:48.792 "small_cache_size": 128, 00:04:48.792 "large_cache_size": 16, 00:04:48.792 "task_count": 2048, 00:04:48.792 "sequence_count": 2048, 00:04:48.792 "buf_count": 2048 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "bdev", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "bdev_set_options", 00:04:48.792 "params": { 00:04:48.792 "bdev_io_pool_size": 65535, 00:04:48.792 "bdev_io_cache_size": 256, 00:04:48.792 "bdev_auto_examine": true, 00:04:48.792 "iobuf_small_cache_size": 128, 00:04:48.792 "iobuf_large_cache_size": 16 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_raid_set_options", 00:04:48.792 "params": { 00:04:48.792 "process_window_size_kb": 1024, 00:04:48.792 "process_max_bandwidth_mb_sec": 0 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_iscsi_set_options", 00:04:48.792 "params": { 00:04:48.792 "timeout_sec": 30 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_nvme_set_options", 00:04:48.792 "params": { 00:04:48.792 "action_on_timeout": "none", 00:04:48.792 "timeout_us": 0, 00:04:48.792 "timeout_admin_us": 0, 00:04:48.792 "keep_alive_timeout_ms": 10000, 00:04:48.792 "arbitration_burst": 0, 
00:04:48.792 "low_priority_weight": 0, 00:04:48.792 "medium_priority_weight": 0, 00:04:48.792 "high_priority_weight": 0, 00:04:48.792 "nvme_adminq_poll_period_us": 10000, 00:04:48.792 "nvme_ioq_poll_period_us": 0, 00:04:48.792 "io_queue_requests": 0, 00:04:48.792 "delay_cmd_submit": true, 00:04:48.792 "transport_retry_count": 4, 00:04:48.792 "bdev_retry_count": 3, 00:04:48.792 "transport_ack_timeout": 0, 00:04:48.792 "ctrlr_loss_timeout_sec": 0, 00:04:48.792 "reconnect_delay_sec": 0, 00:04:48.792 "fast_io_fail_timeout_sec": 0, 00:04:48.792 "disable_auto_failback": false, 00:04:48.792 "generate_uuids": false, 00:04:48.792 "transport_tos": 0, 00:04:48.792 "nvme_error_stat": false, 00:04:48.792 "rdma_srq_size": 0, 00:04:48.792 "io_path_stat": false, 00:04:48.792 "allow_accel_sequence": false, 00:04:48.792 "rdma_max_cq_size": 0, 00:04:48.792 "rdma_cm_event_timeout_ms": 0, 00:04:48.792 "dhchap_digests": [ 00:04:48.792 "sha256", 00:04:48.792 "sha384", 00:04:48.792 "sha512" 00:04:48.792 ], 00:04:48.792 "dhchap_dhgroups": [ 00:04:48.792 "null", 00:04:48.792 "ffdhe2048", 00:04:48.792 "ffdhe3072", 00:04:48.792 "ffdhe4096", 00:04:48.792 "ffdhe6144", 00:04:48.792 "ffdhe8192" 00:04:48.792 ] 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_nvme_set_hotplug", 00:04:48.792 "params": { 00:04:48.792 "period_us": 100000, 00:04:48.792 "enable": false 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_wait_for_examine" 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "scsi", 00:04:48.792 "config": null 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "scheduler", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "framework_set_scheduler", 00:04:48.792 "params": { 00:04:48.792 "name": "static" 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "vhost_scsi", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "vhost_blk", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "ublk", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "nbd", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "nvmf", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "nvmf_set_config", 00:04:48.792 "params": { 00:04:48.792 "discovery_filter": "match_any", 00:04:48.792 "admin_cmd_passthru": { 00:04:48.792 "identify_ctrlr": false 00:04:48.792 }, 00:04:48.792 "dhchap_digests": [ 00:04:48.792 "sha256", 00:04:48.792 "sha384", 00:04:48.792 "sha512" 00:04:48.792 ], 00:04:48.792 "dhchap_dhgroups": [ 00:04:48.792 "null", 00:04:48.792 "ffdhe2048", 00:04:48.792 "ffdhe3072", 00:04:48.792 "ffdhe4096", 00:04:48.792 "ffdhe6144", 00:04:48.792 "ffdhe8192" 00:04:48.792 ] 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "nvmf_set_max_subsystems", 00:04:48.792 "params": { 00:04:48.792 "max_subsystems": 1024 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "nvmf_set_crdt", 00:04:48.792 "params": { 00:04:48.792 "crdt1": 0, 00:04:48.792 "crdt2": 0, 00:04:48.792 "crdt3": 0 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "nvmf_create_transport", 00:04:48.792 "params": { 00:04:48.792 "trtype": "TCP", 00:04:48.792 "max_queue_depth": 128, 00:04:48.792 "max_io_qpairs_per_ctrlr": 127, 00:04:48.792 "in_capsule_data_size": 4096, 00:04:48.792 "max_io_size": 131072, 00:04:48.792 "io_unit_size": 131072, 00:04:48.792 
"max_aq_depth": 128, 00:04:48.792 "num_shared_buffers": 511, 00:04:48.792 "buf_cache_size": 4294967295, 00:04:48.792 "dif_insert_or_strip": false, 00:04:48.792 "zcopy": false, 00:04:48.792 "c2h_success": true, 00:04:48.792 "sock_priority": 0, 00:04:48.792 "abort_timeout_sec": 1, 00:04:48.792 "ack_timeout": 0, 00:04:48.792 "data_wr_pool_size": 0 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "iscsi", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "iscsi_set_options", 00:04:48.792 "params": { 00:04:48.792 "node_base": "iqn.2016-06.io.spdk", 00:04:48.792 "max_sessions": 128, 00:04:48.792 "max_connections_per_session": 2, 00:04:48.792 "max_queue_depth": 64, 00:04:48.792 "default_time2wait": 2, 00:04:48.792 "default_time2retain": 20, 00:04:48.792 "first_burst_length": 8192, 00:04:48.792 "immediate_data": true, 00:04:48.792 "allow_duplicated_isid": false, 00:04:48.792 "error_recovery_level": 0, 00:04:48.792 "nop_timeout": 60, 00:04:48.792 "nop_in_interval": 30, 00:04:48.792 "disable_chap": false, 00:04:48.792 "require_chap": false, 00:04:48.792 "mutual_chap": false, 00:04:48.792 "chap_group": 0, 00:04:48.792 "max_large_datain_per_connection": 64, 00:04:48.792 "max_r2t_per_connection": 4, 00:04:48.792 "pdu_pool_size": 36864, 00:04:48.792 "immediate_data_pool_size": 16384, 00:04:48.792 "data_out_pool_size": 2048 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 } 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57731 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57731 ']' 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57731 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57731 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.792 killing process with pid 57731 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57731' 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57731 00:04:48.792 01:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57731 00:04:50.696 01:17:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57782 00:04:50.696 01:17:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.696 01:17:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57782 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57782 ']' 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57782 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # uname 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57782 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.967 killing process with pid 57782 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57782' 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57782 00:04:55.967 01:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57782 00:04:57.873 01:17:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.873 01:17:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.873 00:04:57.873 real 0m10.129s 00:04:57.873 user 0m9.802s 00:04:57.873 sys 0m0.729s 00:04:57.873 01:17:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.873 01:17:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.873 ************************************ 00:04:57.873 END TEST skip_rpc_with_json 00:04:57.873 ************************************ 00:04:57.873 01:17:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.873 01:17:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.873 01:17:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.873 01:17:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.873 ************************************ 00:04:57.873 START TEST skip_rpc_with_delay 00:04:57.873 ************************************ 00:04:57.873 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:57.873 01:17:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.873 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.874 [2024-09-28 01:17:53.670636] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:57.874 [2024-09-28 01:17:53.671515] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.874 00:04:57.874 real 0m0.210s 00:04:57.874 user 0m0.115s 00:04:57.874 sys 0m0.092s 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.874 01:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.874 ************************************ 00:04:57.874 END TEST skip_rpc_with_delay 00:04:57.874 ************************************ 00:04:57.874 01:17:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.874 01:17:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.874 01:17:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.874 01:17:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.874 01:17:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.874 01:17:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.874 ************************************ 00:04:57.874 START TEST exit_on_failed_rpc_init 00:04:57.874 ************************************ 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57910 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57910 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57910 ']' 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.874 01:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.133 [2024-09-28 01:17:53.930599] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:58.133 [2024-09-28 01:17:53.930781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57910 ] 00:04:58.392 [2024-09-28 01:17:54.091320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.392 [2024-09-28 01:17:54.239236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.651 [2024-09-28 01:17:54.427300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:59.221 01:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.221 [2024-09-28 01:17:55.016592] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:59.221 [2024-09-28 01:17:55.016780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57928 ] 00:04:59.480 [2024-09-28 01:17:55.191860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.480 [2024-09-28 01:17:55.396320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.480 [2024-09-28 01:17:55.396439] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:59.480 [2024-09-28 01:17:55.396509] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:59.480 [2024-09-28 01:17:55.396529] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57910 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57910 ']' 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57910 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57910 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.048 killing process with pid 57910 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57910' 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57910 00:05:00.048 01:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57910 00:05:01.954 00:05:01.954 real 0m3.880s 00:05:01.954 user 0m4.516s 00:05:01.954 sys 0m0.538s 00:05:01.954 01:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.954 01:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.954 ************************************ 00:05:01.954 END TEST exit_on_failed_rpc_init 00:05:01.954 ************************************ 00:05:01.954 01:17:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.954 00:05:01.954 real 0m21.488s 00:05:01.954 user 0m21.066s 
00:05:01.954 sys 0m1.885s 00:05:01.954 01:17:57 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.954 01:17:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.954 ************************************ 00:05:01.954 END TEST skip_rpc 00:05:01.954 ************************************ 00:05:01.954 01:17:57 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.954 01:17:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.954 01:17:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.954 01:17:57 -- common/autotest_common.sh@10 -- # set +x 00:05:01.954 ************************************ 00:05:01.954 START TEST rpc_client 00:05:01.954 ************************************ 00:05:01.954 01:17:57 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.954 * Looking for test storage... 00:05:01.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:01.954 01:17:57 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.954 01:17:57 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.954 01:17:57 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.213 01:17:57 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.213 01:17:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.213 01:17:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.213 01:17:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.213 01:17:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.213 01:17:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.213 01:17:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.213 01:17:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.213 01:17:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.214 01:17:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:02.214 01:17:57 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.214 01:17:57 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.214 --rc genhtml_branch_coverage=1 00:05:02.214 --rc genhtml_function_coverage=1 00:05:02.214 --rc genhtml_legend=1 00:05:02.214 --rc geninfo_all_blocks=1 00:05:02.214 --rc geninfo_unexecuted_blocks=1 00:05:02.214 00:05:02.214 ' 00:05:02.214 01:17:57 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.214 --rc genhtml_branch_coverage=1 00:05:02.214 --rc genhtml_function_coverage=1 00:05:02.214 --rc genhtml_legend=1 00:05:02.214 --rc geninfo_all_blocks=1 00:05:02.214 --rc geninfo_unexecuted_blocks=1 00:05:02.214 00:05:02.214 ' 00:05:02.214 01:17:57 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.214 --rc genhtml_branch_coverage=1 00:05:02.214 --rc genhtml_function_coverage=1 00:05:02.214 --rc genhtml_legend=1 00:05:02.214 --rc geninfo_all_blocks=1 00:05:02.214 --rc geninfo_unexecuted_blocks=1 00:05:02.214 00:05:02.214 ' 00:05:02.214 01:17:57 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.214 --rc genhtml_branch_coverage=1 00:05:02.214 --rc genhtml_function_coverage=1 00:05:02.214 --rc genhtml_legend=1 00:05:02.214 --rc geninfo_all_blocks=1 00:05:02.214 --rc geninfo_unexecuted_blocks=1 00:05:02.214 00:05:02.214 ' 00:05:02.214 01:17:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:02.214 OK 00:05:02.214 01:17:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:02.214 00:05:02.214 real 0m0.221s 00:05:02.214 user 0m0.129s 00:05:02.214 sys 0m0.102s 00:05:02.214 01:17:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.214 ************************************ 00:05:02.214 END TEST rpc_client 00:05:02.214 ************************************ 00:05:02.214 01:17:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:02.214 01:17:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.214 01:17:58 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.214 01:17:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.214 01:17:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.214 ************************************ 00:05:02.214 START TEST json_config 00:05:02.214 ************************************ 00:05:02.214 01:17:58 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.214 01:17:58 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.214 01:17:58 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:02.214 01:17:58 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.474 01:17:58 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.474 01:17:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.474 01:17:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.474 01:17:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.474 01:17:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.474 01:17:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.474 01:17:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.474 01:17:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.474 01:17:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.474 01:17:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.474 01:17:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.474 01:17:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.474 01:17:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:02.474 01:17:58 json_config -- scripts/common.sh@345 -- # : 1 00:05:02.474 01:17:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.474 01:17:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.474 01:17:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:02.474 01:17:58 json_config -- scripts/common.sh@353 -- # local d=1 00:05:02.474 01:17:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.474 01:17:58 json_config -- scripts/common.sh@355 -- # echo 1 00:05:02.474 01:17:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.474 01:17:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:02.474 01:17:58 json_config -- scripts/common.sh@353 -- # local d=2 00:05:02.474 01:17:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.474 01:17:58 json_config -- scripts/common.sh@355 -- # echo 2 00:05:02.474 01:17:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.474 01:17:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.474 01:17:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.474 01:17:58 json_config -- scripts/common.sh@368 -- # return 0 00:05:02.474 01:17:58 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.474 01:17:58 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.474 --rc genhtml_branch_coverage=1 00:05:02.474 --rc genhtml_function_coverage=1 00:05:02.474 --rc genhtml_legend=1 00:05:02.474 --rc geninfo_all_blocks=1 00:05:02.474 --rc geninfo_unexecuted_blocks=1 00:05:02.474 00:05:02.474 ' 00:05:02.474 01:17:58 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.474 --rc genhtml_branch_coverage=1 00:05:02.474 --rc genhtml_function_coverage=1 00:05:02.474 --rc genhtml_legend=1 00:05:02.474 --rc geninfo_all_blocks=1 00:05:02.474 --rc geninfo_unexecuted_blocks=1 00:05:02.474 00:05:02.474 ' 00:05:02.474 01:17:58 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.474 --rc genhtml_branch_coverage=1 00:05:02.474 --rc genhtml_function_coverage=1 00:05:02.474 --rc genhtml_legend=1 00:05:02.474 --rc geninfo_all_blocks=1 00:05:02.474 --rc geninfo_unexecuted_blocks=1 00:05:02.474 00:05:02.474 ' 00:05:02.474 01:17:58 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.474 --rc genhtml_branch_coverage=1 00:05:02.474 --rc genhtml_function_coverage=1 00:05:02.474 --rc genhtml_legend=1 00:05:02.474 --rc geninfo_all_blocks=1 00:05:02.474 --rc geninfo_unexecuted_blocks=1 00:05:02.474 00:05:02.474 ' 00:05:02.474 01:17:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.474 01:17:58 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.474 01:17:58 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.474 01:17:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.474 01:17:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.474 01:17:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.475 01:17:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.475 01:17:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.475 01:17:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.475 01:17:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.475 01:17:58 json_config -- paths/export.sh@5 -- # export PATH 00:05:02.475 01:17:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@51 -- # : 0 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.475 01:17:58 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.475 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.475 01:17:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.475 INFO: JSON configuration test init 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.475 01:17:58 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:02.475 01:17:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:02.475 01:17:58 json_config -- json_config/common.sh@10 -- # shift 
00:05:02.475 01:17:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.475 01:17:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.475 01:17:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.475 01:17:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.475 01:17:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.475 01:17:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58087 00:05:02.475 01:17:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.475 Waiting for target to run... 00:05:02.475 01:17:58 json_config -- json_config/common.sh@25 -- # waitforlisten 58087 /var/tmp/spdk_tgt.sock 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@831 -- # '[' -z 58087 ']' 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.475 01:17:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.475 01:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.475 [2024-09-28 01:17:58.332949] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:02.475 [2024-09-28 01:17:58.333089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58087 ] 00:05:02.735 [2024-09-28 01:17:58.643568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.994 [2024-09-28 01:17:58.790928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.563 01:17:59 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.563 01:17:59 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:03.563 00:05:03.563 01:17:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:03.563 01:17:59 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:03.563 01:17:59 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:03.563 01:17:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.563 01:17:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.563 01:17:59 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:03.563 01:17:59 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:03.563 01:17:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.563 01:17:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.563 01:17:59 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:03.563 01:17:59 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:03.563 01:17:59 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:04.131 [2024-09-28 01:17:59.758842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:04.391 01:18:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.391 01:18:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:04.391 01:18:00 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:04.391 01:18:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@54 -- # sort 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:04.650 01:18:00 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:04.650 01:18:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.650 01:18:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:04.909 01:18:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.909 01:18:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.909 01:18:00 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:04.909 01:18:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.909 01:18:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.168 MallocForNvmf0 00:05:05.168 01:18:00 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.168 01:18:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.427 MallocForNvmf1 00:05:05.427 01:18:01 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.427 01:18:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.427 [2024-09-28 01:18:01.353685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.685 01:18:01 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.685 01:18:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.943 01:18:01 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.943 01:18:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.943 01:18:01 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.943 01:18:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.202 01:18:02 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.202 01:18:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.461 [2024-09-28 01:18:02.338705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.461 01:18:02 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:06.461 01:18:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.461 01:18:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.720 01:18:02 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:06.720 01:18:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.720 01:18:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.720 01:18:02 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:06.720 01:18:02 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.720 01:18:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.979 MallocBdevForConfigChangeCheck 00:05:06.979 01:18:02 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:06.979 01:18:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.979 01:18:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.979 01:18:02 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:06.979 01:18:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.237 INFO: shutting down applications... 00:05:07.237 01:18:03 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:07.237 01:18:03 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:07.237 01:18:03 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:07.237 01:18:03 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:07.496 01:18:03 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:07.755 Calling clear_iscsi_subsystem 00:05:07.755 Calling clear_nvmf_subsystem 00:05:07.755 Calling clear_nbd_subsystem 00:05:07.755 Calling clear_ublk_subsystem 00:05:07.755 Calling clear_vhost_blk_subsystem 00:05:07.755 Calling clear_vhost_scsi_subsystem 00:05:07.755 Calling clear_bdev_subsystem 00:05:07.755 01:18:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:07.755 01:18:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:07.755 01:18:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:07.755 01:18:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:07.755 01:18:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.755 01:18:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.015 01:18:03 json_config -- json_config/json_config.sh@352 -- # break 00:05:08.015 01:18:03 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:08.015 01:18:03 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:08.015 01:18:03 json_config -- json_config/common.sh@31 -- # local app=target 00:05:08.015 01:18:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.015 01:18:03 json_config -- json_config/common.sh@35 -- # [[ -n 58087 ]] 00:05:08.015 01:18:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58087 00:05:08.015 01:18:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.015 01:18:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.015 01:18:03 json_config -- json_config/common.sh@41 -- # kill -0 58087 00:05:08.015 01:18:03 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:08.584 01:18:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.584 01:18:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.584 01:18:04 json_config -- json_config/common.sh@41 -- # kill -0 58087 00:05:08.584 01:18:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.185 01:18:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.185 01:18:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.185 01:18:04 json_config -- json_config/common.sh@41 -- # kill -0 58087 00:05:09.185 01:18:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.185 01:18:04 json_config -- json_config/common.sh@43 -- # break 00:05:09.185 SPDK target shutdown done 00:05:09.185 01:18:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.185 01:18:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.185 INFO: relaunching applications... 00:05:09.185 01:18:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:09.185 01:18:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.185 01:18:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.185 01:18:04 json_config -- json_config/common.sh@10 -- # shift 00:05:09.185 01:18:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.185 01:18:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.185 01:18:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.185 01:18:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.185 01:18:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.185 01:18:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58297 00:05:09.185 01:18:04 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.185 Waiting for target to run... 00:05:09.185 01:18:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.185 01:18:04 json_config -- json_config/common.sh@25 -- # waitforlisten 58297 /var/tmp/spdk_tgt.sock 00:05:09.185 01:18:04 json_config -- common/autotest_common.sh@831 -- # '[' -z 58297 ']' 00:05:09.185 01:18:04 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.185 01:18:04 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.185 01:18:04 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.185 01:18:04 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.185 01:18:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.186 [2024-09-28 01:18:05.034376] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:09.186 [2024-09-28 01:18:05.034571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58297 ] 00:05:09.450 [2024-09-28 01:18:05.336933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.708 [2024-09-28 01:18:05.538990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.966 [2024-09-28 01:18:05.820238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.537 [2024-09-28 01:18:06.368540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.537 [2024-09-28 01:18:06.400692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.537 00:05:10.537 INFO: Checking if target configuration is the same... 00:05:10.537 01:18:06 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.537 01:18:06 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:10.537 01:18:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:10.537 01:18:06 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:10.537 01:18:06 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:10.537 01:18:06 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.537 01:18:06 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:10.537 01:18:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.537 + '[' 2 -ne 2 ']' 00:05:10.537 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:10.537 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:10.537 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:10.537 +++ basename /dev/fd/62 00:05:10.537 ++ mktemp /tmp/62.XXX 00:05:10.537 + tmp_file_1=/tmp/62.mFh 00:05:10.537 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.537 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.537 + tmp_file_2=/tmp/spdk_tgt_config.json.cJx 00:05:10.537 + ret=0 00:05:10.537 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.117 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.117 + diff -u /tmp/62.mFh /tmp/spdk_tgt_config.json.cJx 00:05:11.117 INFO: JSON config files are the same 00:05:11.117 + echo 'INFO: JSON config files are the same' 00:05:11.117 + rm /tmp/62.mFh /tmp/spdk_tgt_config.json.cJx 00:05:11.117 + exit 0 00:05:11.117 INFO: changing configuration and checking if this can be detected... 00:05:11.117 01:18:06 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:11.117 01:18:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
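The "JSON config files are the same" verdict above comes from normalizing two snapshots and diffing them: the live target's save_config output and the previously written spdk_tgt_config.json are each passed through config_filter.py -method sort, so key ordering alone can never produce a difference. A minimal sketch of that comparison, using the paths from this run and assuming config_filter.py reads its JSON on stdin (which is how json_diff.sh appears to drive it here):

    #!/usr/bin/env bash
    # Sketch: is the running target's config identical to the saved JSON file?
    set -euo pipefail

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    sock=/var/tmp/spdk_tgt.sock
    saved=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    live=$(mktemp /tmp/62.XXX)
    ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)

    # Normalize both sides so only real content differences show up in the diff.
    "$rpc" -s "$sock" save_config | "$filter" -method sort > "$live"
    "$filter" -method sort < "$saved" > "$ref"

    if diff -u "$ref" "$live"; then
        echo 'INFO: JSON config files are the same'
        ret=0
    else
        echo 'ERROR: configurations differ' >&2
        ret=1
    fi
    rm -f "$live" "$ref"
    exit "$ret"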
00:05:11.117 01:18:06 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.117 01:18:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.375 01:18:07 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.375 01:18:07 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:11.375 01:18:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.375 + '[' 2 -ne 2 ']' 00:05:11.375 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:11.375 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:11.375 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:11.375 +++ basename /dev/fd/62 00:05:11.375 ++ mktemp /tmp/62.XXX 00:05:11.375 + tmp_file_1=/tmp/62.RfW 00:05:11.375 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.375 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.375 + tmp_file_2=/tmp/spdk_tgt_config.json.7SY 00:05:11.375 + ret=0 00:05:11.375 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.943 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.943 + diff -u /tmp/62.RfW /tmp/spdk_tgt_config.json.7SY 00:05:11.943 + ret=1 00:05:11.943 + echo '=== Start of file: /tmp/62.RfW ===' 00:05:11.943 + cat /tmp/62.RfW 00:05:11.943 + echo '=== End of file: /tmp/62.RfW ===' 00:05:11.943 + echo '' 00:05:11.943 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7SY ===' 00:05:11.943 + cat /tmp/spdk_tgt_config.json.7SY 00:05:11.943 + echo '=== End of file: /tmp/spdk_tgt_config.json.7SY ===' 00:05:11.943 + echo '' 00:05:11.943 + rm /tmp/62.RfW /tmp/spdk_tgt_config.json.7SY 00:05:11.943 + exit 1 00:05:11.943 INFO: configuration change detected. 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
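This second diff is expected to fail: the harness created MallocBdevForConfigChangeCheck during init purely so that deleting it here forces the live configuration to diverge from the saved snapshot, which is why ret=1 and exit 1 are the success path. A rough sketch of that marker-bdev pattern, using only the RPCs shown in this log (the snapshot path is a placeholder, and the real test normalizes both sides with config_filter.py first):

    # Sketch: deterministic "did the config change?" probe via a throwaway bdev.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

    # At init time: create the marker bdev (8 MB, 512-byte blocks).
    rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

    # Snapshot the configuration before the change under test.
    rpc save_config > /tmp/saved_config.json

    # The mutation: removing the marker must be visible in a fresh save_config.
    rpc bdev_malloc_delete MallocBdevForConfigChangeCheck

    if diff -u /tmp/saved_config.json <(rpc save_config); then
        echo 'ERROR: configuration change was not detected' >&2
    else
        echo 'INFO: configuration change detected.'
    fi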
00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@324 -- # [[ -n 58297 ]] 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.943 01:18:07 json_config -- json_config/json_config.sh@330 -- # killprocess 58297 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@950 -- # '[' -z 58297 ']' 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@954 -- # kill -0 58297 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@955 -- # uname 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58297 00:05:11.943 killing process with pid 58297 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58297' 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@969 -- # kill 58297 00:05:11.943 01:18:07 json_config -- common/autotest_common.sh@974 -- # wait 58297 00:05:12.879 01:18:08 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:12.879 01:18:08 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:12.879 01:18:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.879 01:18:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.879 INFO: Success 00:05:12.879 01:18:08 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:12.879 01:18:08 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:12.879 00:05:12.879 real 0m10.553s 00:05:12.879 user 0m14.224s 00:05:12.879 sys 0m1.657s 00:05:12.879 
************************************ 00:05:12.879 END TEST json_config 00:05:12.879 ************************************ 00:05:12.879 01:18:08 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.879 01:18:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.879 01:18:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.879 01:18:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.879 01:18:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.879 01:18:08 -- common/autotest_common.sh@10 -- # set +x 00:05:12.879 ************************************ 00:05:12.879 START TEST json_config_extra_key 00:05:12.879 ************************************ 00:05:12.879 01:18:08 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.879 01:18:08 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.879 01:18:08 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.879 01:18:08 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.879 01:18:08 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.879 01:18:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.880 01:18:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:12.880 01:18:08 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.880 01:18:08 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.880 --rc genhtml_branch_coverage=1 00:05:12.880 --rc genhtml_function_coverage=1 00:05:12.880 --rc genhtml_legend=1 00:05:12.880 --rc geninfo_all_blocks=1 00:05:12.880 --rc geninfo_unexecuted_blocks=1 00:05:12.880 00:05:12.880 ' 00:05:12.880 01:18:08 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.880 --rc genhtml_branch_coverage=1 00:05:12.880 --rc genhtml_function_coverage=1 00:05:12.880 --rc genhtml_legend=1 00:05:12.880 --rc geninfo_all_blocks=1 00:05:12.880 --rc geninfo_unexecuted_blocks=1 00:05:12.880 00:05:12.880 ' 00:05:12.880 01:18:08 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.880 --rc genhtml_branch_coverage=1 00:05:12.880 --rc genhtml_function_coverage=1 00:05:12.880 --rc genhtml_legend=1 00:05:12.880 --rc geninfo_all_blocks=1 00:05:12.880 --rc geninfo_unexecuted_blocks=1 00:05:12.880 00:05:12.880 ' 00:05:12.880 01:18:08 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.880 --rc genhtml_branch_coverage=1 00:05:12.880 --rc genhtml_function_coverage=1 00:05:12.880 --rc genhtml_legend=1 00:05:12.880 --rc geninfo_all_blocks=1 00:05:12.880 --rc geninfo_unexecuted_blocks=1 00:05:12.880 00:05:12.880 ' 00:05:12.880 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.880 01:18:08 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.880 01:18:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:13.139 01:18:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.139 01:18:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.139 01:18:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.139 01:18:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.139 01:18:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.139 01:18:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.139 01:18:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.139 01:18:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.139 01:18:08 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.139 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.139 01:18:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.139 INFO: launching applications... 00:05:13.139 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
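json_config_extra_key exercises the same start/stop helpers as json_config, but boots the target from the standalone test/json_config/extra_key.json rather than a generated config; common.sh tracks the app's pid, socket, parameters and config path in the associative arrays echoed above. A minimal sketch of that launch-and-wait step, assuming the binary and script paths from this run (waitforlisten's real implementation is not shown in this log, so the poll loop below is an approximation):

    #!/usr/bin/env bash
    # Sketch: start spdk_tgt from a JSON config and wait for its RPC socket.
    set -euo pipefail

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    config=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    "$spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json "$config" &
    tgt_pid=$!
    echo "Waiting for target to run... (pid $tgt_pid)"

    # Poll until an RPC succeeds, giving up after roughly 10 seconds.
    for _ in $(seq 1 100); do
        if "$rpc" -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
            echo 'Target is up'
            exit 0
        fi
        sleep 0.1
    done
    echo 'ERROR: target never started listening' >&2
    exit 1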
00:05:13.140 01:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58456 00:05:13.140 Waiting for target to run... 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:13.140 01:18:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58456 /var/tmp/spdk_tgt.sock 00:05:13.140 01:18:08 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58456 ']' 00:05:13.140 01:18:08 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.140 01:18:08 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.140 01:18:08 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.140 01:18:08 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.140 01:18:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.140 [2024-09-28 01:18:08.928527] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:13.140 [2024-09-28 01:18:08.928674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58456 ] 00:05:13.399 [2024-09-28 01:18:09.232210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.657 [2024-09-28 01:18:09.367392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.657 [2024-09-28 01:18:09.540502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.225 01:18:09 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.225 01:18:09 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:14.225 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.225 INFO: shutting down applications... 00:05:14.225 01:18:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
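Shutdown is the mirror image: send SIGINT to the recorded pid, then poll with kill -0 in half-second steps, giving the target up to 30 iterations to exit cleanly, exactly as the loop below the INFO line shows. A condensed sketch of that pattern, assuming tgt_pid holds the pid captured at launch (58456 in this run):

    # Sketch of the graceful shutdown loop used by json_config_test_shutdown_app.
    kill -SIGINT "$tgt_pid"

    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$tgt_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done

    # Still alive after ~15 s: fail loudly. The helper's exact escalation beyond
    # this point is not visible in the log, so the sketch just reports the error.
    if kill -0 "$tgt_pid" 2> /dev/null; then
        echo 'ERROR: target did not exit after SIGINT' >&2
        exit 1
    fi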
00:05:14.225 01:18:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58456 ]] 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58456 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58456 00:05:14.225 01:18:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.792 01:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.792 01:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.792 01:18:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58456 00:05:14.792 01:18:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.051 01:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.051 01:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.051 01:18:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58456 00:05:15.051 01:18:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.619 01:18:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.619 01:18:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.619 01:18:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58456 00:05:15.619 01:18:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.188 01:18:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.188 01:18:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.188 01:18:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58456 00:05:16.188 01:18:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.756 01:18:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.756 01:18:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.756 01:18:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58456 00:05:16.756 01:18:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:16.756 01:18:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:16.756 01:18:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:16.756 01:18:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:16.756 SPDK target shutdown done 00:05:16.756 Success 00:05:16.756 01:18:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:16.756 00:05:16.756 real 0m3.797s 00:05:16.756 user 0m3.424s 00:05:16.756 sys 0m0.441s 00:05:16.756 01:18:12 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.756 01:18:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.756 ************************************ 00:05:16.756 END TEST json_config_extra_key 00:05:16.756 ************************************ 00:05:16.756 01:18:12 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.756 01:18:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.756 01:18:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.756 01:18:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.756 ************************************ 00:05:16.756 START TEST alias_rpc 00:05:16.756 ************************************ 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.756 * Looking for test storage... 00:05:16.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.756 01:18:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.756 --rc genhtml_branch_coverage=1 00:05:16.756 --rc genhtml_function_coverage=1 00:05:16.756 --rc genhtml_legend=1 00:05:16.756 --rc geninfo_all_blocks=1 00:05:16.756 --rc geninfo_unexecuted_blocks=1 00:05:16.756 00:05:16.756 ' 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.756 --rc genhtml_branch_coverage=1 00:05:16.756 --rc genhtml_function_coverage=1 00:05:16.756 --rc genhtml_legend=1 00:05:16.756 --rc geninfo_all_blocks=1 00:05:16.756 --rc geninfo_unexecuted_blocks=1 00:05:16.756 00:05:16.756 ' 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.756 --rc genhtml_branch_coverage=1 00:05:16.756 --rc genhtml_function_coverage=1 00:05:16.756 --rc genhtml_legend=1 00:05:16.756 --rc geninfo_all_blocks=1 00:05:16.756 --rc geninfo_unexecuted_blocks=1 00:05:16.756 00:05:16.756 ' 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.756 --rc genhtml_branch_coverage=1 00:05:16.756 --rc genhtml_function_coverage=1 00:05:16.756 --rc genhtml_legend=1 00:05:16.756 --rc geninfo_all_blocks=1 00:05:16.756 --rc geninfo_unexecuted_blocks=1 00:05:16.756 00:05:16.756 ' 00:05:16.756 01:18:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.756 01:18:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58561 00:05:16.756 01:18:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.756 01:18:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58561 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58561 ']' 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.756 01:18:12 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.757 01:18:12 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.757 01:18:12 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.757 01:18:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.016 [2024-09-28 01:18:12.787046] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:17.016 [2024-09-28 01:18:12.787235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58561 ] 00:05:17.275 [2024-09-28 01:18:12.957496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.275 [2024-09-28 01:18:13.131409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.534 [2024-09-28 01:18:13.312389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.103 01:18:13 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.103 01:18:13 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:18.104 01:18:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:18.104 01:18:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58561 00:05:18.104 01:18:14 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58561 ']' 00:05:18.104 01:18:14 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58561 00:05:18.104 01:18:14 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:18.104 01:18:14 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.104 01:18:14 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58561 00:05:18.363 01:18:14 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.363 01:18:14 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.363 killing process with pid 58561 00:05:18.363 01:18:14 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58561' 00:05:18.363 01:18:14 alias_rpc -- common/autotest_common.sh@969 -- # kill 58561 00:05:18.363 01:18:14 alias_rpc -- common/autotest_common.sh@974 -- # wait 58561 00:05:20.268 00:05:20.268 real 0m3.492s 00:05:20.268 user 0m3.718s 00:05:20.268 sys 0m0.459s 00:05:20.268 01:18:15 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.268 01:18:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.268 ************************************ 00:05:20.268 END TEST alias_rpc 00:05:20.268 ************************************ 00:05:20.268 01:18:16 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:20.268 01:18:16 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:20.268 01:18:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.268 01:18:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.268 01:18:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.268 ************************************ 00:05:20.268 START TEST spdkcli_tcp 00:05:20.268 ************************************ 00:05:20.268 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:20.268 * Looking for test storage... 
00:05:20.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:20.268 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:20.268 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:20.268 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:20.527 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.527 01:18:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:20.527 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.527 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.527 --rc genhtml_branch_coverage=1 00:05:20.527 --rc genhtml_function_coverage=1 00:05:20.527 --rc genhtml_legend=1 00:05:20.527 --rc geninfo_all_blocks=1 00:05:20.527 --rc geninfo_unexecuted_blocks=1 00:05:20.527 00:05:20.527 ' 00:05:20.527 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.527 --rc genhtml_branch_coverage=1 00:05:20.527 --rc genhtml_function_coverage=1 00:05:20.527 --rc genhtml_legend=1 00:05:20.527 --rc geninfo_all_blocks=1 00:05:20.527 --rc geninfo_unexecuted_blocks=1 00:05:20.527 
00:05:20.527 ' 00:05:20.527 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.527 --rc genhtml_branch_coverage=1 00:05:20.527 --rc genhtml_function_coverage=1 00:05:20.527 --rc genhtml_legend=1 00:05:20.527 --rc geninfo_all_blocks=1 00:05:20.527 --rc geninfo_unexecuted_blocks=1 00:05:20.527 00:05:20.527 ' 00:05:20.527 01:18:16 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:20.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.527 --rc genhtml_branch_coverage=1 00:05:20.527 --rc genhtml_function_coverage=1 00:05:20.527 --rc genhtml_legend=1 00:05:20.527 --rc geninfo_all_blocks=1 00:05:20.527 --rc geninfo_unexecuted_blocks=1 00:05:20.527 00:05:20.527 ' 00:05:20.527 01:18:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:20.527 01:18:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:20.527 01:18:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:20.527 01:18:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:20.527 01:18:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:20.528 01:18:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:20.528 01:18:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:20.528 01:18:16 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.528 01:18:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.528 01:18:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58667 00:05:20.528 01:18:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58667 00:05:20.528 01:18:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58667 ']' 00:05:20.528 01:18:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:20.528 01:18:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.528 01:18:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.528 01:18:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.528 01:18:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.528 01:18:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.528 [2024-09-28 01:18:16.359295] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:20.528 [2024-09-28 01:18:16.359494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58667 ] 00:05:20.786 [2024-09-28 01:18:16.529000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.786 [2024-09-28 01:18:16.688775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.786 [2024-09-28 01:18:16.688790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.046 [2024-09-28 01:18:16.891383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.613 01:18:17 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.613 01:18:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:21.613 01:18:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:21.613 01:18:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58685 00:05:21.613 01:18:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:21.873 [ 00:05:21.873 "bdev_malloc_delete", 00:05:21.873 "bdev_malloc_create", 00:05:21.873 "bdev_null_resize", 00:05:21.873 "bdev_null_delete", 00:05:21.873 "bdev_null_create", 00:05:21.873 "bdev_nvme_cuse_unregister", 00:05:21.873 "bdev_nvme_cuse_register", 00:05:21.873 "bdev_opal_new_user", 00:05:21.873 "bdev_opal_set_lock_state", 00:05:21.873 "bdev_opal_delete", 00:05:21.873 "bdev_opal_get_info", 00:05:21.873 "bdev_opal_create", 00:05:21.873 "bdev_nvme_opal_revert", 00:05:21.873 "bdev_nvme_opal_init", 00:05:21.873 "bdev_nvme_send_cmd", 00:05:21.873 "bdev_nvme_set_keys", 00:05:21.873 "bdev_nvme_get_path_iostat", 00:05:21.873 "bdev_nvme_get_mdns_discovery_info", 00:05:21.873 "bdev_nvme_stop_mdns_discovery", 00:05:21.873 "bdev_nvme_start_mdns_discovery", 00:05:21.873 "bdev_nvme_set_multipath_policy", 00:05:21.873 "bdev_nvme_set_preferred_path", 00:05:21.873 "bdev_nvme_get_io_paths", 00:05:21.873 "bdev_nvme_remove_error_injection", 00:05:21.873 "bdev_nvme_add_error_injection", 00:05:21.873 "bdev_nvme_get_discovery_info", 00:05:21.873 "bdev_nvme_stop_discovery", 00:05:21.873 "bdev_nvme_start_discovery", 00:05:21.873 "bdev_nvme_get_controller_health_info", 00:05:21.873 "bdev_nvme_disable_controller", 00:05:21.873 "bdev_nvme_enable_controller", 00:05:21.873 "bdev_nvme_reset_controller", 00:05:21.873 "bdev_nvme_get_transport_statistics", 00:05:21.873 "bdev_nvme_apply_firmware", 00:05:21.873 "bdev_nvme_detach_controller", 00:05:21.873 "bdev_nvme_get_controllers", 00:05:21.873 "bdev_nvme_attach_controller", 00:05:21.873 "bdev_nvme_set_hotplug", 00:05:21.873 "bdev_nvme_set_options", 00:05:21.873 "bdev_passthru_delete", 00:05:21.873 "bdev_passthru_create", 00:05:21.873 "bdev_lvol_set_parent_bdev", 00:05:21.873 "bdev_lvol_set_parent", 00:05:21.873 "bdev_lvol_check_shallow_copy", 00:05:21.873 "bdev_lvol_start_shallow_copy", 00:05:21.873 "bdev_lvol_grow_lvstore", 00:05:21.873 "bdev_lvol_get_lvols", 00:05:21.873 "bdev_lvol_get_lvstores", 00:05:21.873 "bdev_lvol_delete", 00:05:21.873 "bdev_lvol_set_read_only", 00:05:21.873 "bdev_lvol_resize", 00:05:21.873 "bdev_lvol_decouple_parent", 00:05:21.873 "bdev_lvol_inflate", 00:05:21.873 "bdev_lvol_rename", 00:05:21.873 "bdev_lvol_clone_bdev", 00:05:21.873 "bdev_lvol_clone", 00:05:21.873 "bdev_lvol_snapshot", 
00:05:21.873 "bdev_lvol_create", 00:05:21.873 "bdev_lvol_delete_lvstore", 00:05:21.873 "bdev_lvol_rename_lvstore", 00:05:21.873 "bdev_lvol_create_lvstore", 00:05:21.873 "bdev_raid_set_options", 00:05:21.873 "bdev_raid_remove_base_bdev", 00:05:21.873 "bdev_raid_add_base_bdev", 00:05:21.873 "bdev_raid_delete", 00:05:21.873 "bdev_raid_create", 00:05:21.873 "bdev_raid_get_bdevs", 00:05:21.873 "bdev_error_inject_error", 00:05:21.873 "bdev_error_delete", 00:05:21.873 "bdev_error_create", 00:05:21.873 "bdev_split_delete", 00:05:21.873 "bdev_split_create", 00:05:21.874 "bdev_delay_delete", 00:05:21.874 "bdev_delay_create", 00:05:21.874 "bdev_delay_update_latency", 00:05:21.874 "bdev_zone_block_delete", 00:05:21.874 "bdev_zone_block_create", 00:05:21.874 "blobfs_create", 00:05:21.874 "blobfs_detect", 00:05:21.874 "blobfs_set_cache_size", 00:05:21.874 "bdev_aio_delete", 00:05:21.874 "bdev_aio_rescan", 00:05:21.874 "bdev_aio_create", 00:05:21.874 "bdev_ftl_set_property", 00:05:21.874 "bdev_ftl_get_properties", 00:05:21.874 "bdev_ftl_get_stats", 00:05:21.874 "bdev_ftl_unmap", 00:05:21.874 "bdev_ftl_unload", 00:05:21.874 "bdev_ftl_delete", 00:05:21.874 "bdev_ftl_load", 00:05:21.874 "bdev_ftl_create", 00:05:21.874 "bdev_virtio_attach_controller", 00:05:21.874 "bdev_virtio_scsi_get_devices", 00:05:21.874 "bdev_virtio_detach_controller", 00:05:21.874 "bdev_virtio_blk_set_hotplug", 00:05:21.874 "bdev_iscsi_delete", 00:05:21.874 "bdev_iscsi_create", 00:05:21.874 "bdev_iscsi_set_options", 00:05:21.874 "bdev_uring_delete", 00:05:21.874 "bdev_uring_rescan", 00:05:21.874 "bdev_uring_create", 00:05:21.874 "accel_error_inject_error", 00:05:21.874 "ioat_scan_accel_module", 00:05:21.874 "dsa_scan_accel_module", 00:05:21.874 "iaa_scan_accel_module", 00:05:21.874 "vfu_virtio_create_fs_endpoint", 00:05:21.874 "vfu_virtio_create_scsi_endpoint", 00:05:21.874 "vfu_virtio_scsi_remove_target", 00:05:21.874 "vfu_virtio_scsi_add_target", 00:05:21.874 "vfu_virtio_create_blk_endpoint", 00:05:21.874 "vfu_virtio_delete_endpoint", 00:05:21.874 "keyring_file_remove_key", 00:05:21.874 "keyring_file_add_key", 00:05:21.874 "keyring_linux_set_options", 00:05:21.874 "fsdev_aio_delete", 00:05:21.874 "fsdev_aio_create", 00:05:21.874 "iscsi_get_histogram", 00:05:21.874 "iscsi_enable_histogram", 00:05:21.874 "iscsi_set_options", 00:05:21.874 "iscsi_get_auth_groups", 00:05:21.874 "iscsi_auth_group_remove_secret", 00:05:21.874 "iscsi_auth_group_add_secret", 00:05:21.874 "iscsi_delete_auth_group", 00:05:21.874 "iscsi_create_auth_group", 00:05:21.874 "iscsi_set_discovery_auth", 00:05:21.874 "iscsi_get_options", 00:05:21.874 "iscsi_target_node_request_logout", 00:05:21.874 "iscsi_target_node_set_redirect", 00:05:21.874 "iscsi_target_node_set_auth", 00:05:21.874 "iscsi_target_node_add_lun", 00:05:21.874 "iscsi_get_stats", 00:05:21.874 "iscsi_get_connections", 00:05:21.874 "iscsi_portal_group_set_auth", 00:05:21.874 "iscsi_start_portal_group", 00:05:21.874 "iscsi_delete_portal_group", 00:05:21.874 "iscsi_create_portal_group", 00:05:21.874 "iscsi_get_portal_groups", 00:05:21.874 "iscsi_delete_target_node", 00:05:21.874 "iscsi_target_node_remove_pg_ig_maps", 00:05:21.874 "iscsi_target_node_add_pg_ig_maps", 00:05:21.874 "iscsi_create_target_node", 00:05:21.874 "iscsi_get_target_nodes", 00:05:21.874 "iscsi_delete_initiator_group", 00:05:21.874 "iscsi_initiator_group_remove_initiators", 00:05:21.874 "iscsi_initiator_group_add_initiators", 00:05:21.874 "iscsi_create_initiator_group", 00:05:21.874 "iscsi_get_initiator_groups", 00:05:21.874 
"nvmf_set_crdt", 00:05:21.874 "nvmf_set_config", 00:05:21.874 "nvmf_set_max_subsystems", 00:05:21.874 "nvmf_stop_mdns_prr", 00:05:21.874 "nvmf_publish_mdns_prr", 00:05:21.874 "nvmf_subsystem_get_listeners", 00:05:21.874 "nvmf_subsystem_get_qpairs", 00:05:21.874 "nvmf_subsystem_get_controllers", 00:05:21.874 "nvmf_get_stats", 00:05:21.874 "nvmf_get_transports", 00:05:21.874 "nvmf_create_transport", 00:05:21.874 "nvmf_get_targets", 00:05:21.874 "nvmf_delete_target", 00:05:21.874 "nvmf_create_target", 00:05:21.874 "nvmf_subsystem_allow_any_host", 00:05:21.874 "nvmf_subsystem_set_keys", 00:05:21.874 "nvmf_subsystem_remove_host", 00:05:21.874 "nvmf_subsystem_add_host", 00:05:21.874 "nvmf_ns_remove_host", 00:05:21.874 "nvmf_ns_add_host", 00:05:21.874 "nvmf_subsystem_remove_ns", 00:05:21.874 "nvmf_subsystem_set_ns_ana_group", 00:05:21.874 "nvmf_subsystem_add_ns", 00:05:21.874 "nvmf_subsystem_listener_set_ana_state", 00:05:21.874 "nvmf_discovery_get_referrals", 00:05:21.874 "nvmf_discovery_remove_referral", 00:05:21.874 "nvmf_discovery_add_referral", 00:05:21.874 "nvmf_subsystem_remove_listener", 00:05:21.874 "nvmf_subsystem_add_listener", 00:05:21.874 "nvmf_delete_subsystem", 00:05:21.874 "nvmf_create_subsystem", 00:05:21.874 "nvmf_get_subsystems", 00:05:21.874 "env_dpdk_get_mem_stats", 00:05:21.874 "nbd_get_disks", 00:05:21.874 "nbd_stop_disk", 00:05:21.874 "nbd_start_disk", 00:05:21.874 "ublk_recover_disk", 00:05:21.874 "ublk_get_disks", 00:05:21.874 "ublk_stop_disk", 00:05:21.874 "ublk_start_disk", 00:05:21.874 "ublk_destroy_target", 00:05:21.874 "ublk_create_target", 00:05:21.874 "virtio_blk_create_transport", 00:05:21.874 "virtio_blk_get_transports", 00:05:21.874 "vhost_controller_set_coalescing", 00:05:21.874 "vhost_get_controllers", 00:05:21.874 "vhost_delete_controller", 00:05:21.874 "vhost_create_blk_controller", 00:05:21.874 "vhost_scsi_controller_remove_target", 00:05:21.874 "vhost_scsi_controller_add_target", 00:05:21.874 "vhost_start_scsi_controller", 00:05:21.874 "vhost_create_scsi_controller", 00:05:21.874 "thread_set_cpumask", 00:05:21.874 "scheduler_set_options", 00:05:21.874 "framework_get_governor", 00:05:21.874 "framework_get_scheduler", 00:05:21.874 "framework_set_scheduler", 00:05:21.874 "framework_get_reactors", 00:05:21.874 "thread_get_io_channels", 00:05:21.874 "thread_get_pollers", 00:05:21.874 "thread_get_stats", 00:05:21.874 "framework_monitor_context_switch", 00:05:21.874 "spdk_kill_instance", 00:05:21.874 "log_enable_timestamps", 00:05:21.874 "log_get_flags", 00:05:21.874 "log_clear_flag", 00:05:21.874 "log_set_flag", 00:05:21.874 "log_get_level", 00:05:21.874 "log_set_level", 00:05:21.874 "log_get_print_level", 00:05:21.874 "log_set_print_level", 00:05:21.874 "framework_enable_cpumask_locks", 00:05:21.874 "framework_disable_cpumask_locks", 00:05:21.874 "framework_wait_init", 00:05:21.874 "framework_start_init", 00:05:21.874 "scsi_get_devices", 00:05:21.874 "bdev_get_histogram", 00:05:21.874 "bdev_enable_histogram", 00:05:21.874 "bdev_set_qos_limit", 00:05:21.874 "bdev_set_qd_sampling_period", 00:05:21.874 "bdev_get_bdevs", 00:05:21.874 "bdev_reset_iostat", 00:05:21.874 "bdev_get_iostat", 00:05:21.874 "bdev_examine", 00:05:21.874 "bdev_wait_for_examine", 00:05:21.874 "bdev_set_options", 00:05:21.874 "accel_get_stats", 00:05:21.874 "accel_set_options", 00:05:21.874 "accel_set_driver", 00:05:21.874 "accel_crypto_key_destroy", 00:05:21.874 "accel_crypto_keys_get", 00:05:21.874 "accel_crypto_key_create", 00:05:21.874 "accel_assign_opc", 00:05:21.874 
"accel_get_module_info", 00:05:21.874 "accel_get_opc_assignments", 00:05:21.874 "vmd_rescan", 00:05:21.874 "vmd_remove_device", 00:05:21.874 "vmd_enable", 00:05:21.874 "sock_get_default_impl", 00:05:21.874 "sock_set_default_impl", 00:05:21.874 "sock_impl_set_options", 00:05:21.874 "sock_impl_get_options", 00:05:21.874 "iobuf_get_stats", 00:05:21.874 "iobuf_set_options", 00:05:21.874 "keyring_get_keys", 00:05:21.874 "vfu_tgt_set_base_path", 00:05:21.874 "framework_get_pci_devices", 00:05:21.874 "framework_get_config", 00:05:21.874 "framework_get_subsystems", 00:05:21.874 "fsdev_set_opts", 00:05:21.874 "fsdev_get_opts", 00:05:21.874 "trace_get_info", 00:05:21.874 "trace_get_tpoint_group_mask", 00:05:21.874 "trace_disable_tpoint_group", 00:05:21.874 "trace_enable_tpoint_group", 00:05:21.874 "trace_clear_tpoint_mask", 00:05:21.874 "trace_set_tpoint_mask", 00:05:21.874 "notify_get_notifications", 00:05:21.874 "notify_get_types", 00:05:21.874 "spdk_get_version", 00:05:21.874 "rpc_get_methods" 00:05:21.874 ] 00:05:21.874 01:18:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.874 01:18:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:21.874 01:18:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58667 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58667 ']' 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58667 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58667 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.874 killing process with pid 58667 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58667' 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58667 00:05:21.874 01:18:17 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58667 00:05:24.409 00:05:24.409 real 0m3.708s 00:05:24.409 user 0m6.657s 00:05:24.409 sys 0m0.530s 00:05:24.409 01:18:19 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.409 01:18:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.409 ************************************ 00:05:24.409 END TEST spdkcli_tcp 00:05:24.409 ************************************ 00:05:24.410 01:18:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.410 01:18:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.410 01:18:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.410 01:18:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.410 ************************************ 00:05:24.410 START TEST dpdk_mem_utility 00:05:24.410 ************************************ 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.410 * Looking for test storage... 
00:05:24.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.410 01:18:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:24.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.410 --rc genhtml_branch_coverage=1 00:05:24.410 --rc genhtml_function_coverage=1 00:05:24.410 --rc genhtml_legend=1 00:05:24.410 --rc geninfo_all_blocks=1 00:05:24.410 --rc geninfo_unexecuted_blocks=1 00:05:24.410 00:05:24.410 ' 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:24.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.410 --rc 
genhtml_branch_coverage=1 00:05:24.410 --rc genhtml_function_coverage=1 00:05:24.410 --rc genhtml_legend=1 00:05:24.410 --rc geninfo_all_blocks=1 00:05:24.410 --rc geninfo_unexecuted_blocks=1 00:05:24.410 00:05:24.410 ' 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:24.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.410 --rc genhtml_branch_coverage=1 00:05:24.410 --rc genhtml_function_coverage=1 00:05:24.410 --rc genhtml_legend=1 00:05:24.410 --rc geninfo_all_blocks=1 00:05:24.410 --rc geninfo_unexecuted_blocks=1 00:05:24.410 00:05:24.410 ' 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:24.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.410 --rc genhtml_branch_coverage=1 00:05:24.410 --rc genhtml_function_coverage=1 00:05:24.410 --rc genhtml_legend=1 00:05:24.410 --rc geninfo_all_blocks=1 00:05:24.410 --rc geninfo_unexecuted_blocks=1 00:05:24.410 00:05:24.410 ' 00:05:24.410 01:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.410 01:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58779 00:05:24.410 01:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.410 01:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58779 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58779 ']' 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.410 01:18:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.410 [2024-09-28 01:18:20.115473] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:24.410 [2024-09-28 01:18:20.115650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58779 ] 00:05:24.410 [2024-09-28 01:18:20.286243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.669 [2024-09-28 01:18:20.448486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.929 [2024-09-28 01:18:20.631313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.189 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.189 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:25.189 01:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.189 01:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.189 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.189 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.189 { 00:05:25.189 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.189 } 00:05:25.189 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.189 01:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.448 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:25.448 1 heaps totaling size 866.000000 MiB 00:05:25.449 size: 866.000000 MiB heap id: 0 00:05:25.449 end heaps---------- 00:05:25.449 9 mempools totaling size 642.649841 MiB 00:05:25.449 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.449 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.449 size: 92.545471 MiB name: bdev_io_58779 00:05:25.449 size: 51.011292 MiB name: evtpool_58779 00:05:25.449 size: 50.003479 MiB name: msgpool_58779 00:05:25.449 size: 36.509338 MiB name: fsdev_io_58779 00:05:25.449 size: 21.763794 MiB name: PDU_Pool 00:05:25.449 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.449 size: 0.026123 MiB name: Session_Pool 00:05:25.449 end mempools------- 00:05:25.449 6 memzones totaling size 4.142822 MiB 00:05:25.449 size: 1.000366 MiB name: RG_ring_0_58779 00:05:25.449 size: 1.000366 MiB name: RG_ring_1_58779 00:05:25.449 size: 1.000366 MiB name: RG_ring_4_58779 00:05:25.449 size: 1.000366 MiB name: RG_ring_5_58779 00:05:25.449 size: 0.125366 MiB name: RG_ring_2_58779 00:05:25.449 size: 0.015991 MiB name: RG_ring_3_58779 00:05:25.449 end memzones------- 00:05:25.449 01:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.449 heap id: 0 total size: 866.000000 MiB number of busy elements: 310 number of free elements: 19 00:05:25.449 list of free elements. 
size: 19.914795 MiB 00:05:25.449 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:25.449 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:25.449 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:25.449 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:25.449 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:25.449 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:25.449 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:25.449 element at address: 0x20001c400000 with size: 0.999084 MiB 00:05:25.449 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:25.449 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:05:25.449 element at address: 0x20001c700040 with size: 0.936401 MiB 00:05:25.449 element at address: 0x200000200000 with size: 0.831909 MiB 00:05:25.449 element at address: 0x20001de00000 with size: 0.562195 MiB 00:05:25.449 element at address: 0x200003e00000 with size: 0.490662 MiB 00:05:25.449 element at address: 0x20001c000000 with size: 0.489197 MiB 00:05:25.449 element at address: 0x20001c800000 with size: 0.485413 MiB 00:05:25.449 element at address: 0x200015e00000 with size: 0.443481 MiB 00:05:25.449 element at address: 0x20002b200000 with size: 0.390442 MiB 00:05:25.449 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:25.449 list of standard malloc elements. size: 199.286499 MiB 00:05:25.449 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:05:25.449 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:05:25.449 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:05:25.449 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:05:25.449 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:05:25.449 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:25.449 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:05:25.449 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:25.449 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:05:25.449 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:05:25.449 element at address: 0x200015dff040 with size: 0.000305 MiB 00:05:25.449 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6200 with size: 0.000244 MiB 
00:05:25.449 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7e9c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003aff700 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:05:25.449 element at 
address: 0x200003e7e2c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003e7ecc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff180 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff280 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff380 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff480 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff580 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff680 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff780 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff880 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dff980 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e71880 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e71980 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e72080 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015e72180 with size: 0.000244 MiB 00:05:25.449 element at address: 0x200015ef24c0 
with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de921c0 with size: 0.000244 MiB 
00:05:25.449 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:05:25.449 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:05:25.450 element at 
address: 0x20001de953c0 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b264040 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26dc80 
with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:05:25.450 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:05:25.450 list of memzone associated elements. 
size: 646.798706 MiB 00:05:25.450 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:05:25.450 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.450 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:05:25.450 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.450 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:05:25.450 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58779_0 00:05:25.450 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:25.450 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58779_0 00:05:25.450 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:25.450 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58779_0 00:05:25.450 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:05:25.450 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58779_0 00:05:25.450 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:05:25.450 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.450 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:05:25.450 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.450 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:25.450 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58779 00:05:25.450 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:25.450 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58779 00:05:25.450 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:25.450 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58779 00:05:25.450 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:05:25.450 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.450 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:05:25.450 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.450 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:05:25.450 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.450 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:05:25.450 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.450 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:25.450 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58779 00:05:25.450 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:25.450 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58779 00:05:25.450 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:05:25.450 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58779 00:05:25.450 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:05:25.450 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58779 00:05:25.450 element at address: 0x200003a7f4c0 with size: 0.500549 MiB 00:05:25.450 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58779 00:05:25.450 element at address: 0x200003e7edc0 with size: 0.500549 MiB 00:05:25.450 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58779 00:05:25.450 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:05:25.450 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.450 element at address: 0x200015e72280 with size: 0.500549 MiB 00:05:25.450 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.450 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:05:25.450 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.450 element at address: 0x200003a5e780 with size: 0.125549 MiB 00:05:25.450 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58779 00:05:25.450 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:05:25.450 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.450 element at address: 0x20002b264140 with size: 0.023804 MiB 00:05:25.450 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.450 element at address: 0x200003a5a540 with size: 0.016174 MiB 00:05:25.450 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58779 00:05:25.450 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:05:25.450 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.450 element at address: 0x2000002d6080 with size: 0.000366 MiB 00:05:25.450 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58779 00:05:25.450 element at address: 0x200003aff800 with size: 0.000366 MiB 00:05:25.450 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58779 00:05:25.450 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:05:25.451 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58779 00:05:25.451 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:05:25.451 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.451 01:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.451 01:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58779 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58779 ']' 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58779 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58779 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.451 killing process with pid 58779 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58779' 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58779 00:05:25.451 01:18:21 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58779 00:05:27.357 00:05:27.357 real 0m3.412s 00:05:27.357 user 0m3.557s 00:05:27.357 sys 0m0.467s 00:05:27.357 01:18:23 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.357 01:18:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.357 ************************************ 00:05:27.357 END TEST dpdk_mem_utility 00:05:27.357 ************************************ 00:05:27.357 01:18:23 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:27.357 01:18:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.357 01:18:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.357 01:18:23 -- common/autotest_common.sh@10 -- # set +x 
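The dpdk_mem_utility section above exercises two pieces: the env_dpdk_get_mem_stats RPC, which writes the allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that dump into the heap/mempool/memzone summary and the per-element listing shown. A minimal sketch of the same flow against a running target on the default RPC socket, mirroring what test_dpdk_mem_info.sh does:

    # 1. ask the target to dump DPDK memory stats (filename reported as /tmp/spdk_mem_dump.txt)
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # 2. summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py
    # 3. show the per-element breakdown for heap 0, as the test does
    ./scripts/dpdk_mem_info.py -m 0

Paths are relative to a built SPDK tree.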
00:05:27.357 ************************************ 00:05:27.357 START TEST event 00:05:27.357 ************************************ 00:05:27.357 01:18:23 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:27.616 * Looking for test storage... 00:05:27.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.616 01:18:23 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.616 01:18:23 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.616 01:18:23 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.616 01:18:23 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.616 01:18:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.616 01:18:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.616 01:18:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.616 01:18:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.616 01:18:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.616 01:18:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.616 01:18:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.616 01:18:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.616 01:18:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.616 01:18:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.616 01:18:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.616 01:18:23 event -- scripts/common.sh@344 -- # case "$op" in 00:05:27.616 01:18:23 event -- scripts/common.sh@345 -- # : 1 00:05:27.616 01:18:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.616 01:18:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.616 01:18:23 event -- scripts/common.sh@365 -- # decimal 1 00:05:27.616 01:18:23 event -- scripts/common.sh@353 -- # local d=1 00:05:27.616 01:18:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.616 01:18:23 event -- scripts/common.sh@355 -- # echo 1 00:05:27.616 01:18:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.616 01:18:23 event -- scripts/common.sh@366 -- # decimal 2 00:05:27.616 01:18:23 event -- scripts/common.sh@353 -- # local d=2 00:05:27.616 01:18:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.616 01:18:23 event -- scripts/common.sh@355 -- # echo 2 00:05:27.616 01:18:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.616 01:18:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.616 01:18:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.616 01:18:23 event -- scripts/common.sh@368 -- # return 0 00:05:27.616 01:18:23 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.616 01:18:23 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.616 --rc genhtml_branch_coverage=1 00:05:27.616 --rc genhtml_function_coverage=1 00:05:27.616 --rc genhtml_legend=1 00:05:27.616 --rc geninfo_all_blocks=1 00:05:27.616 --rc geninfo_unexecuted_blocks=1 00:05:27.616 00:05:27.616 ' 00:05:27.616 01:18:23 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.616 --rc genhtml_branch_coverage=1 00:05:27.616 --rc genhtml_function_coverage=1 00:05:27.616 --rc genhtml_legend=1 00:05:27.616 --rc 
geninfo_all_blocks=1 00:05:27.616 --rc geninfo_unexecuted_blocks=1 00:05:27.616 00:05:27.616 ' 00:05:27.616 01:18:23 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.616 --rc genhtml_branch_coverage=1 00:05:27.616 --rc genhtml_function_coverage=1 00:05:27.616 --rc genhtml_legend=1 00:05:27.617 --rc geninfo_all_blocks=1 00:05:27.617 --rc geninfo_unexecuted_blocks=1 00:05:27.617 00:05:27.617 ' 00:05:27.617 01:18:23 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.617 --rc genhtml_branch_coverage=1 00:05:27.617 --rc genhtml_function_coverage=1 00:05:27.617 --rc genhtml_legend=1 00:05:27.617 --rc geninfo_all_blocks=1 00:05:27.617 --rc geninfo_unexecuted_blocks=1 00:05:27.617 00:05:27.617 ' 00:05:27.617 01:18:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:27.617 01:18:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:27.617 01:18:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:27.617 01:18:23 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:27.617 01:18:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.617 01:18:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.617 ************************************ 00:05:27.617 START TEST event_perf 00:05:27.617 ************************************ 00:05:27.617 01:18:23 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:27.617 Running I/O for 1 seconds...[2024-09-28 01:18:23.492066] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:27.617 [2024-09-28 01:18:23.492227] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58882 ] 00:05:27.876 [2024-09-28 01:18:23.662682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.135 [2024-09-28 01:18:23.818805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.135 [2024-09-28 01:18:23.818962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.135 [2024-09-28 01:18:23.819044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.135 Running I/O for 1 seconds...[2024-09-28 01:18:23.819046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.523 00:05:29.523 lcore 0: 189782 00:05:29.523 lcore 1: 189782 00:05:29.523 lcore 2: 189782 00:05:29.523 lcore 3: 189782 00:05:29.523 done. 
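The event_perf numbers above come from a 1-second run across a 0xF core mask; each lcore reports how many events it processed (189782 per core in this run). The invocation the harness uses can be run directly from a built SPDK tree:

    # poll events on 4 cores (mask 0xF) for 1 second and report per-lcore counts
    ./test/event/event_perf/event_perf -m 0xF -t 1
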
00:05:29.523 00:05:29.523 real 0m1.692s 00:05:29.523 user 0m4.451s 00:05:29.523 sys 0m0.116s 00:05:29.523 01:18:25 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.523 01:18:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.523 ************************************ 00:05:29.523 END TEST event_perf 00:05:29.523 ************************************ 00:05:29.523 01:18:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:29.523 01:18:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:29.523 01:18:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.523 01:18:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.523 ************************************ 00:05:29.523 START TEST event_reactor 00:05:29.523 ************************************ 00:05:29.523 01:18:25 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:29.523 [2024-09-28 01:18:25.238850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:29.523 [2024-09-28 01:18:25.239003] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:05:29.523 [2024-09-28 01:18:25.393043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.788 [2024-09-28 01:18:25.559964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.165 test_start 00:05:31.165 oneshot 00:05:31.165 tick 100 00:05:31.165 tick 100 00:05:31.165 tick 250 00:05:31.165 tick 100 00:05:31.165 tick 100 00:05:31.165 tick 100 00:05:31.165 tick 250 00:05:31.165 tick 500 00:05:31.165 tick 100 00:05:31.165 tick 100 00:05:31.165 tick 250 00:05:31.165 tick 100 00:05:31.165 tick 100 00:05:31.165 test_end 00:05:31.165 00:05:31.165 real 0m1.698s 00:05:31.165 user 0m1.514s 00:05:31.165 sys 0m0.074s 00:05:31.165 01:18:26 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.165 01:18:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.165 ************************************ 00:05:31.165 END TEST event_reactor 00:05:31.165 ************************************ 00:05:31.165 01:18:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.165 01:18:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:31.165 01:18:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.165 01:18:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.165 ************************************ 00:05:31.165 START TEST event_reactor_perf 00:05:31.165 ************************************ 00:05:31.165 01:18:26 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.165 [2024-09-28 01:18:27.004461] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:31.165 [2024-09-28 01:18:27.004676] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58963 ] 00:05:31.424 [2024-09-28 01:18:27.177686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.424 [2024-09-28 01:18:27.339437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.800 test_start 00:05:32.800 test_end 00:05:32.800 Performance: 327840 events per second 00:05:32.800 00:05:32.800 real 0m1.715s 00:05:32.800 user 0m1.511s 00:05:32.800 sys 0m0.094s 00:05:32.800 01:18:28 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.800 01:18:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.800 ************************************ 00:05:32.800 END TEST event_reactor_perf 00:05:32.800 ************************************ 00:05:32.800 01:18:28 event -- event/event.sh@49 -- # uname -s 00:05:32.800 01:18:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.800 01:18:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:32.800 01:18:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.800 01:18:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.800 01:18:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.060 ************************************ 00:05:33.060 START TEST event_scheduler 00:05:33.060 ************************************ 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.060 * Looking for test storage... 
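The reactor and reactor_perf runs above follow the same pattern on a single core: reactor drives a fixed oneshot/tick schedule, while reactor_perf reports raw event throughput (327840 events per second in this run). Their invocations, as taken from the run_test lines above and relative to the SPDK tree:

    # single-core reactor tick test, 1 second
    ./test/event/reactor/reactor -t 1
    # single-core reactor event throughput test, 1 second
    ./test/event/reactor_perf/reactor_perf -t 1
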
00:05:33.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.060 01:18:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.060 --rc genhtml_branch_coverage=1 00:05:33.060 --rc genhtml_function_coverage=1 00:05:33.060 --rc genhtml_legend=1 00:05:33.060 --rc geninfo_all_blocks=1 00:05:33.060 --rc geninfo_unexecuted_blocks=1 00:05:33.060 00:05:33.060 ' 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.060 --rc genhtml_branch_coverage=1 00:05:33.060 --rc genhtml_function_coverage=1 00:05:33.060 --rc genhtml_legend=1 00:05:33.060 --rc geninfo_all_blocks=1 00:05:33.060 --rc geninfo_unexecuted_blocks=1 00:05:33.060 00:05:33.060 ' 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.060 --rc genhtml_branch_coverage=1 00:05:33.060 --rc genhtml_function_coverage=1 00:05:33.060 --rc genhtml_legend=1 00:05:33.060 --rc geninfo_all_blocks=1 00:05:33.060 --rc geninfo_unexecuted_blocks=1 00:05:33.060 00:05:33.060 ' 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.060 --rc genhtml_branch_coverage=1 00:05:33.060 --rc genhtml_function_coverage=1 00:05:33.060 --rc genhtml_legend=1 00:05:33.060 --rc geninfo_all_blocks=1 00:05:33.060 --rc geninfo_unexecuted_blocks=1 00:05:33.060 00:05:33.060 ' 00:05:33.060 01:18:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.060 01:18:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59039 00:05:33.060 01:18:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:33.060 01:18:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.060 01:18:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59039 00:05:33.060 01:18:28 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59039 ']' 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.060 01:18:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.319 [2024-09-28 01:18:29.040101] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:33.319 [2024-09-28 01:18:29.040310] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59039 ] 00:05:33.319 [2024-09-28 01:18:29.216019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.578 [2024-09-28 01:18:29.453408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.578 [2024-09-28 01:18:29.453593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.578 [2024-09-28 01:18:29.453745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.578 [2024-09-28 01:18:29.454178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.147 01:18:29 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.147 01:18:29 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:34.147 01:18:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:34.147 01:18:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.147 01:18:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.147 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.147 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.147 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.147 POWER: Cannot set governor of lcore 0 to performance 00:05:34.147 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.147 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.147 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.147 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.147 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:34.147 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:34.147 POWER: Unable to set Power Management Environment for lcore 0 00:05:34.147 [2024-09-28 01:18:30.008637] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:34.147 [2024-09-28 01:18:30.008665] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:34.147 [2024-09-28 01:18:30.008679] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:34.147 [2024-09-28 01:18:30.009006] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:34.147 [2024-09-28 01:18:30.009037] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:34.147 [2024-09-28 01:18:30.009053] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:34.147 01:18:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.147 01:18:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:34.147 01:18:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.147 01:18:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 [2024-09-28 01:18:30.168244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.406 [2024-09-28 01:18:30.252208] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:34.406 01:18:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.406 01:18:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:34.406 01:18:30 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.406 01:18:30 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.406 01:18:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 ************************************ 00:05:34.406 START TEST scheduler_create_thread 00:05:34.406 ************************************ 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 2 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 3 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 4 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 5 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:34.406 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.407 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.407 6 00:05:34.407 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.407 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:34.407 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.407 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.666 7 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.666 8 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.666 9 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.666 01:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.045 10 00:05:36.045 01:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.045 01:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:36.045 01:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.045 01:18:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.612 01:18:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.612 01:18:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:36.612 01:18:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:36.612 01:18:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.612 01:18:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.548 01:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.548 01:18:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:37.548 01:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.548 01:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.116 01:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.116 01:18:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:38.116 01:18:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:38.116 01:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.116 01:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.684 ************************************ 00:05:38.684 END TEST scheduler_create_thread 00:05:38.684 ************************************ 00:05:38.684 01:18:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.684 00:05:38.684 real 0m4.212s 00:05:38.684 user 0m0.020s 00:05:38.684 sys 0m0.009s 00:05:38.684 01:18:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.684 01:18:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.684 01:18:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.684 01:18:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59039 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59039 ']' 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59039 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59039 00:05:38.684 killing process with pid 59039 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
59039' 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59039 00:05:38.684 01:18:34 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 59039 00:05:38.943 [2024-09-28 01:18:34.757024] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:40.322 00:05:40.322 real 0m7.198s 00:05:40.322 user 0m16.168s 00:05:40.322 sys 0m0.485s 00:05:40.322 01:18:35 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.322 01:18:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.322 ************************************ 00:05:40.322 END TEST event_scheduler 00:05:40.322 ************************************ 00:05:40.322 01:18:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:40.322 01:18:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:40.322 01:18:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.322 01:18:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.322 01:18:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.322 ************************************ 00:05:40.322 START TEST app_repeat 00:05:40.322 ************************************ 00:05:40.322 01:18:35 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59162 00:05:40.322 01:18:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.323 01:18:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:40.323 Process app_repeat pid: 59162 00:05:40.323 01:18:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59162' 00:05:40.323 01:18:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.323 01:18:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:40.323 spdk_app_start Round 0 00:05:40.323 01:18:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59162 /var/tmp/spdk-nbd.sock 00:05:40.323 01:18:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59162 ']' 00:05:40.323 01:18:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.323 01:18:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.323 01:18:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:40.323 01:18:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.323 01:18:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.323 [2024-09-28 01:18:36.049039] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:40.323 [2024-09-28 01:18:36.049187] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59162 ] 00:05:40.323 [2024-09-28 01:18:36.208632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.582 [2024-09-28 01:18:36.387829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.582 [2024-09-28 01:18:36.387846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.841 [2024-09-28 01:18:36.549788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.410 01:18:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.410 01:18:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:41.410 01:18:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.668 Malloc0 00:05:41.668 01:18:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.927 Malloc1 00:05:41.927 01:18:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.927 01:18:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.928 01:18:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.928 01:18:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.187 /dev/nbd0 00:05:42.187 01:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.187 01:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.187 1+0 records in 00:05:42.187 1+0 records out 00:05:42.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372259 s, 11.0 MB/s 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:42.187 01:18:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:42.187 01:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.187 01:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.187 01:18:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.446 /dev/nbd1 00:05:42.446 01:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.446 01:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.446 1+0 records in 00:05:42.446 1+0 records out 00:05:42.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373869 s, 11.0 MB/s 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.446 01:18:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:42.446 01:18:38 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:05:42.446 01:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.446 01:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.446 01:18:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.446 01:18:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.446 01:18:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.705 01:18:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.705 { 00:05:42.705 "nbd_device": "/dev/nbd0", 00:05:42.705 "bdev_name": "Malloc0" 00:05:42.705 }, 00:05:42.705 { 00:05:42.705 "nbd_device": "/dev/nbd1", 00:05:42.705 "bdev_name": "Malloc1" 00:05:42.705 } 00:05:42.705 ]' 00:05:42.705 01:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.705 { 00:05:42.705 "nbd_device": "/dev/nbd0", 00:05:42.705 "bdev_name": "Malloc0" 00:05:42.705 }, 00:05:42.705 { 00:05:42.705 "nbd_device": "/dev/nbd1", 00:05:42.705 "bdev_name": "Malloc1" 00:05:42.705 } 00:05:42.705 ]' 00:05:42.705 01:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.964 /dev/nbd1' 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.964 /dev/nbd1' 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.964 256+0 records in 00:05:42.964 256+0 records out 00:05:42.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00820264 s, 128 MB/s 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.964 256+0 records in 00:05:42.964 256+0 records out 00:05:42.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311367 s, 33.7 MB/s 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.964 256+0 records in 00:05:42.964 
256+0 records out 00:05:42.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301678 s, 34.8 MB/s 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.964 01:18:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.223 01:18:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.481 01:18:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.739 01:18:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.740 01:18:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.740 01:18:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.999 01:18:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.999 01:18:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.258 01:18:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.636 [2024-09-28 01:18:41.258061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.636 [2024-09-28 01:18:41.414348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.636 [2024-09-28 01:18:41.414354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.895 [2024-09-28 01:18:41.571262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.895 [2024-09-28 01:18:41.571655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.895 [2024-09-28 01:18:41.571695] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.273 spdk_app_start Round 1 00:05:47.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.273 01:18:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.273 01:18:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.273 01:18:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59162 /var/tmp/spdk-nbd.sock 00:05:47.273 01:18:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59162 ']' 00:05:47.273 01:18:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.273 01:18:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.273 01:18:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:47.273 01:18:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.273 01:18:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.532 01:18:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.532 01:18:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:47.532 01:18:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.099 Malloc0 00:05:48.099 01:18:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.357 Malloc1 00:05:48.357 01:18:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.357 01:18:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.616 /dev/nbd0 00:05:48.616 01:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.616 01:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.616 1+0 records in 00:05:48.616 1+0 records out 
00:05:48.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239073 s, 17.1 MB/s 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:48.616 01:18:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:48.616 01:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.616 01:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.616 01:18:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.875 /dev/nbd1 00:05:48.875 01:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.875 01:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.875 1+0 records in 00:05:48.875 1+0 records out 00:05:48.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253077 s, 16.2 MB/s 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:48.875 01:18:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:48.875 01:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.875 01:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.875 01:18:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.875 01:18:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.875 01:18:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.134 01:18:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.134 { 00:05:49.134 "nbd_device": "/dev/nbd0", 00:05:49.134 "bdev_name": "Malloc0" 00:05:49.134 }, 00:05:49.134 { 00:05:49.134 "nbd_device": "/dev/nbd1", 00:05:49.134 "bdev_name": "Malloc1" 00:05:49.134 } 
00:05:49.134 ]' 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.134 { 00:05:49.134 "nbd_device": "/dev/nbd0", 00:05:49.134 "bdev_name": "Malloc0" 00:05:49.134 }, 00:05:49.134 { 00:05:49.134 "nbd_device": "/dev/nbd1", 00:05:49.134 "bdev_name": "Malloc1" 00:05:49.134 } 00:05:49.134 ]' 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.134 /dev/nbd1' 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.134 /dev/nbd1' 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.134 01:18:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.135 01:18:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.135 01:18:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.135 01:18:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.393 256+0 records in 00:05:49.393 256+0 records out 00:05:49.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100014 s, 105 MB/s 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.394 256+0 records in 00:05:49.394 256+0 records out 00:05:49.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249381 s, 42.0 MB/s 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.394 256+0 records in 00:05:49.394 256+0 records out 00:05:49.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266362 s, 39.4 MB/s 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.394 01:18:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.653 01:18:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.916 01:18:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.197 01:18:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.197 01:18:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.776 01:18:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.713 [2024-09-28 01:18:47.498224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.973 [2024-09-28 01:18:47.648207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.973 [2024-09-28 01:18:47.648209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.973 [2024-09-28 01:18:47.800117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.973 [2024-09-28 01:18:47.800262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.973 [2024-09-28 01:18:47.800282] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.877 01:18:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.877 spdk_app_start Round 2 00:05:53.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.877 01:18:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.877 01:18:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59162 /var/tmp/spdk-nbd.sock 00:05:53.877 01:18:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59162 ']' 00:05:53.877 01:18:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.877 01:18:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.877 01:18:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:53.877 01:18:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.877 01:18:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.877 01:18:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.877 01:18:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:53.877 01:18:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.136 Malloc0 00:05:54.395 01:18:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.653 Malloc1 00:05:54.653 01:18:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.653 01:18:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.653 01:18:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.653 01:18:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.653 01:18:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.653 01:18:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.653 01:18:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.653 01:18:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.654 01:18:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.654 01:18:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.654 01:18:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.654 01:18:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.654 01:18:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.654 01:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.654 01:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.654 01:18:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.912 /dev/nbd0 00:05:54.912 01:18:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.913 01:18:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.913 1+0 records in 00:05:54.913 1+0 records out 
00:05:54.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301528 s, 13.6 MB/s 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:54.913 01:18:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:54.913 01:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.913 01:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.913 01:18:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.172 /dev/nbd1 00:05:55.172 01:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.172 01:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.172 1+0 records in 00:05:55.172 1+0 records out 00:05:55.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027236 s, 15.0 MB/s 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.172 01:18:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.172 01:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.172 01:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.172 01:18:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.172 01:18:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.172 01:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.431 01:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.431 { 00:05:55.431 "nbd_device": "/dev/nbd0", 00:05:55.431 "bdev_name": "Malloc0" 00:05:55.431 }, 00:05:55.431 { 00:05:55.431 "nbd_device": "/dev/nbd1", 00:05:55.431 "bdev_name": "Malloc1" 00:05:55.431 } 
00:05:55.431 ]' 00:05:55.431 01:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.431 { 00:05:55.431 "nbd_device": "/dev/nbd0", 00:05:55.431 "bdev_name": "Malloc0" 00:05:55.431 }, 00:05:55.431 { 00:05:55.431 "nbd_device": "/dev/nbd1", 00:05:55.431 "bdev_name": "Malloc1" 00:05:55.431 } 00:05:55.431 ]' 00:05:55.431 01:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.690 /dev/nbd1' 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.690 /dev/nbd1' 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.690 256+0 records in 00:05:55.690 256+0 records out 00:05:55.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106036 s, 98.9 MB/s 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.690 256+0 records in 00:05:55.690 256+0 records out 00:05:55.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243363 s, 43.1 MB/s 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.690 256+0 records in 00:05:55.690 256+0 records out 00:05:55.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326531 s, 32.1 MB/s 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.690 01:18:51 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.690 01:18:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.949 01:18:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.207 01:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.465 01:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.465 01:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.465 01:18:52 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.724 01:18:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.724 01:18:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.983 01:18:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.921 [2024-09-28 01:18:53.817279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.180 [2024-09-28 01:18:53.974290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.180 [2024-09-28 01:18:53.974294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.440 [2024-09-28 01:18:54.118070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.440 [2024-09-28 01:18:54.118201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.440 [2024-09-28 01:18:54.118226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.346 01:18:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59162 /var/tmp/spdk-nbd.sock 00:06:00.346 01:18:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59162 ']' 00:06:00.346 01:18:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.346 01:18:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.346 01:18:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:00.346 01:18:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.346 01:18:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.346 01:18:56 event.app_repeat -- event/event.sh@39 -- # killprocess 59162 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59162 ']' 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59162 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59162 00:06:00.346 killing process with pid 59162 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59162' 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59162 00:06:00.346 01:18:56 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59162 00:06:01.283 spdk_app_start is called in Round 0. 00:06:01.283 Shutdown signal received, stop current app iteration 00:06:01.283 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:01.283 spdk_app_start is called in Round 1. 00:06:01.283 Shutdown signal received, stop current app iteration 00:06:01.283 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:01.283 spdk_app_start is called in Round 2. 00:06:01.283 Shutdown signal received, stop current app iteration 00:06:01.283 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:01.284 spdk_app_start is called in Round 3. 00:06:01.284 Shutdown signal received, stop current app iteration 00:06:01.284 ************************************ 00:06:01.284 END TEST app_repeat 00:06:01.284 ************************************ 00:06:01.284 01:18:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:01.284 01:18:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:01.284 00:06:01.284 real 0m21.069s 00:06:01.284 user 0m46.305s 00:06:01.284 sys 0m2.682s 00:06:01.284 01:18:57 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.284 01:18:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.284 01:18:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:01.284 01:18:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:01.284 01:18:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.284 01:18:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.284 01:18:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.284 ************************************ 00:06:01.284 START TEST cpu_locks 00:06:01.284 ************************************ 00:06:01.284 01:18:57 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:01.284 * Looking for test storage... 
00:06:01.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:01.284 01:18:57 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:01.284 01:18:57 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:01.284 01:18:57 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.543 01:18:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:01.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.543 --rc genhtml_branch_coverage=1 00:06:01.543 --rc genhtml_function_coverage=1 00:06:01.543 --rc genhtml_legend=1 00:06:01.543 --rc geninfo_all_blocks=1 00:06:01.543 --rc geninfo_unexecuted_blocks=1 00:06:01.543 00:06:01.543 ' 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:01.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.543 --rc genhtml_branch_coverage=1 00:06:01.543 --rc genhtml_function_coverage=1 
00:06:01.543 --rc genhtml_legend=1 00:06:01.543 --rc geninfo_all_blocks=1 00:06:01.543 --rc geninfo_unexecuted_blocks=1 00:06:01.543 00:06:01.543 ' 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:01.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.543 --rc genhtml_branch_coverage=1 00:06:01.543 --rc genhtml_function_coverage=1 00:06:01.543 --rc genhtml_legend=1 00:06:01.543 --rc geninfo_all_blocks=1 00:06:01.543 --rc geninfo_unexecuted_blocks=1 00:06:01.543 00:06:01.543 ' 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:01.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.543 --rc genhtml_branch_coverage=1 00:06:01.543 --rc genhtml_function_coverage=1 00:06:01.543 --rc genhtml_legend=1 00:06:01.543 --rc geninfo_all_blocks=1 00:06:01.543 --rc geninfo_unexecuted_blocks=1 00:06:01.543 00:06:01.543 ' 00:06:01.543 01:18:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:01.543 01:18:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:01.543 01:18:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:01.543 01:18:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.543 01:18:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.543 ************************************ 00:06:01.543 START TEST default_locks 00:06:01.543 ************************************ 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59632 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59632 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59632 ']' 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.543 01:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.543 [2024-09-28 01:18:57.411204] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:01.544 [2024-09-28 01:18:57.411349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59632 ] 00:06:01.803 [2024-09-28 01:18:57.565144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.803 [2024-09-28 01:18:57.717914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.062 [2024-09-28 01:18:57.895970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.631 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.631 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:02.631 01:18:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59632 00:06:02.631 01:18:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59632 00:06:02.631 01:18:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59632 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59632 ']' 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59632 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59632 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.200 killing process with pid 59632 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59632' 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59632 00:06:03.200 01:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59632 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59632 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59632 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59632 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59632 ']' 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.103 
01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.103 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59632) - No such process 00:06:05.103 ERROR: process (pid: 59632) is no longer running 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.103 00:06:05.103 real 0m3.459s 00:06:05.103 user 0m3.661s 00:06:05.103 sys 0m0.582s 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.103 01:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.103 ************************************ 00:06:05.103 END TEST default_locks 00:06:05.103 ************************************ 00:06:05.103 01:19:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:05.103 01:19:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.103 01:19:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.103 01:19:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.103 ************************************ 00:06:05.103 START TEST default_locks_via_rpc 00:06:05.103 ************************************ 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59696 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59696 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59696 ']' 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:06:05.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.103 01:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.103 [2024-09-28 01:19:00.952126] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:05.103 [2024-09-28 01:19:00.952292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:06:05.362 [2024-09-28 01:19:01.120838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.362 [2024-09-28 01:19:01.277853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.621 [2024-09-28 01:19:01.458962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59696 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59696 00:06:06.189 01:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59696 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59696 ']' 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59696 00:06:06.449 01:19:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59696 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.449 killing process with pid 59696 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59696' 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59696 00:06:06.449 01:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59696 00:06:08.354 00:06:08.354 real 0m3.371s 00:06:08.354 user 0m3.434s 00:06:08.354 sys 0m0.564s 00:06:08.354 01:19:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.354 01:19:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.354 ************************************ 00:06:08.354 END TEST default_locks_via_rpc 00:06:08.354 ************************************ 00:06:08.354 01:19:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.354 01:19:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.354 01:19:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.354 01:19:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.354 ************************************ 00:06:08.354 START TEST non_locking_app_on_locked_coremask 00:06:08.354 ************************************ 00:06:08.354 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:08.354 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59763 00:06:08.354 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.354 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59763 /var/tmp/spdk.sock 00:06:08.354 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59763 ']' 00:06:08.354 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.354 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.355 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:08.355 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.355 01:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.614 [2024-09-28 01:19:04.350291] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:08.614 [2024-09-28 01:19:04.350429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59763 ] 00:06:08.614 [2024-09-28 01:19:04.501117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.873 [2024-09-28 01:19:04.654265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.132 [2024-09-28 01:19:04.846585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59788 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59788 /var/tmp/spdk2.sock 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59788 ']' 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.391 01:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.673 [2024-09-28 01:19:05.438726] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:09.673 [2024-09-28 01:19:05.438888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59788 ] 00:06:09.961 [2024-09-28 01:19:05.617296] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.961 [2024-09-28 01:19:05.617357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.232 [2024-09-28 01:19:05.940091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.491 [2024-09-28 01:19:06.348369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.427 01:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.427 01:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.427 01:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59763 00:06:11.427 01:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59763 00:06:11.427 01:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59763 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59763 ']' 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59763 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59763 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.363 killing process with pid 59763 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59763' 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59763 00:06:12.363 01:19:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59763 00:06:16.550 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59788 00:06:16.550 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59788 ']' 00:06:16.551 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59788 00:06:16.551 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:16.551 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.551 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59788 00:06:16.551 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.551 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.551 killing process with pid 59788 00:06:16.551 01:19:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59788' 00:06:16.551 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59788 00:06:16.551 01:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59788 00:06:17.927 00:06:17.927 real 0m9.605s 00:06:17.927 user 0m10.093s 00:06:17.927 sys 0m1.237s 00:06:17.927 01:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.927 01:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.927 ************************************ 00:06:17.927 END TEST non_locking_app_on_locked_coremask 00:06:17.927 ************************************ 00:06:18.186 01:19:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.186 01:19:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.186 01:19:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.186 01:19:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.186 ************************************ 00:06:18.186 START TEST locking_app_on_unlocked_coremask 00:06:18.186 ************************************ 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59914 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59914 /var/tmp/spdk.sock 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59914 ']' 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.186 01:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.186 [2024-09-28 01:19:14.036028] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:18.186 [2024-09-28 01:19:14.036204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59914 ] 00:06:18.445 [2024-09-28 01:19:14.203303] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.445 [2024-09-28 01:19:14.203361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.445 [2024-09-28 01:19:14.356374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.703 [2024-09-28 01:19:14.552580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.271 01:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.271 01:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59930 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59930 /var/tmp/spdk2.sock 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59930 ']' 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.271 01:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.271 [2024-09-28 01:19:15.134731] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:19.271 [2024-09-28 01:19:15.134909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59930 ] 00:06:19.530 [2024-09-28 01:19:15.304690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.802 [2024-09-28 01:19:15.635142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.372 [2024-09-28 01:19:16.018019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.308 01:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.308 01:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:21.308 01:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59930 00:06:21.308 01:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59930 00:06:21.308 01:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59914 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59914 ']' 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59914 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59914 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.245 killing process with pid 59914 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59914' 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59914 00:06:22.245 01:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59914 00:06:26.437 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59930 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59930 ']' 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59930 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59930 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.438 killing process with pid 59930 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59930' 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59930 00:06:26.438 01:19:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59930 00:06:28.357 00:06:28.357 real 0m9.879s 00:06:28.357 user 0m10.396s 00:06:28.357 sys 0m1.242s 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.357 ************************************ 00:06:28.357 END TEST locking_app_on_unlocked_coremask 00:06:28.357 ************************************ 00:06:28.357 01:19:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.357 01:19:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.357 01:19:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.357 01:19:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.357 ************************************ 00:06:28.357 START TEST locking_app_on_locked_coremask 00:06:28.357 ************************************ 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60061 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60061 /var/tmp/spdk.sock 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60061 ']' 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.357 01:19:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.357 [2024-09-28 01:19:23.974319] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:28.357 [2024-09-28 01:19:23.974547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60061 ] 00:06:28.357 [2024-09-28 01:19:24.146749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.616 [2024-09-28 01:19:24.324676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.616 [2024-09-28 01:19:24.515546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60077 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60077 /var/tmp/spdk2.sock 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60077 /var/tmp/spdk2.sock 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.184 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60077 /var/tmp/spdk2.sock 00:06:29.185 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60077 ']' 00:06:29.185 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.185 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.185 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.185 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.185 01:19:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.185 [2024-09-28 01:19:25.088498] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:29.185 [2024-09-28 01:19:25.088632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60077 ] 00:06:29.444 [2024-09-28 01:19:25.252142] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60061 has claimed it. 00:06:29.444 [2024-09-28 01:19:25.252221] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.012 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60077) - No such process 00:06:30.012 ERROR: process (pid: 60077) is no longer running 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60061 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60061 00:06:30.012 01:19:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.629 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60061 00:06:30.629 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60061 ']' 00:06:30.629 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60061 00:06:30.629 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.629 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.629 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60061 00:06:30.629 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.629 killing process with pid 60061 00:06:30.629 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.630 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60061' 00:06:30.630 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60061 00:06:30.630 01:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60061 00:06:32.542 00:06:32.542 real 0m4.490s 00:06:32.542 user 0m4.942s 00:06:32.542 sys 0m0.735s 00:06:32.542 01:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.542 ************************************ 00:06:32.542 END 
TEST locking_app_on_locked_coremask 00:06:32.542 01:19:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.542 ************************************ 00:06:32.542 01:19:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:32.542 01:19:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.542 01:19:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.542 01:19:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.542 ************************************ 00:06:32.542 START TEST locking_overlapped_coremask 00:06:32.542 ************************************ 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60146 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60146 /var/tmp/spdk.sock 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60146 ']' 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:32.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.542 01:19:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.801 [2024-09-28 01:19:28.504500] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
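The locks_exist check at the end of the previous test confirms that the surviving target, pid 60061, still owns its CPU core lock by listing the file locks held by that pid and looking for the spdk_cpu_lock prefix. A rough stand-alone version of the same check:

    # Sketch: does pid 60061 still hold an spdk_cpu_lock file lock?
    if lslocks -p 60061 | grep -q spdk_cpu_lock; then
      echo 'core lock still held by pid 60061'
    else
      echo 'no spdk_cpu_lock entry for pid 60061' >&2
    fi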
00:06:32.801 [2024-09-28 01:19:28.504670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60146 ] 00:06:32.801 [2024-09-28 01:19:28.670793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.060 [2024-09-28 01:19:28.830149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.060 [2024-09-28 01:19:28.830277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.060 [2024-09-28 01:19:28.830291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.319 [2024-09-28 01:19:29.027568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60164 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60164 /var/tmp/spdk2.sock 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60164 /var/tmp/spdk2.sock 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60164 /var/tmp/spdk2.sock 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60164 ']' 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.887 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.888 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.888 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.888 01:19:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.888 [2024-09-28 01:19:29.683988] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
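In this test the first target runs with -m 0x7 (cores 0, 1 and 2) and the second with -m 0x1c (cores 2, 3 and 4), so the only contested core is core 2, which is exactly the core named in the claim error that follows. The overlap can be confirmed with shell arithmetic:

    # Sketch: the bitwise AND of the two masks is the contested core set.
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. bit 2 -> core 2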
00:06:33.888 [2024-09-28 01:19:29.684146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:06:34.147 [2024-09-28 01:19:29.866477] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60146 has claimed it. 00:06:34.147 [2024-09-28 01:19:29.866559] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.404 ERROR: process (pid: 60164) is no longer running 00:06:34.404 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60164) - No such process 00:06:34.404 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.404 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60146 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60146 ']' 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60146 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.405 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60146 00:06:34.663 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.663 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.663 killing process with pid 60146 00:06:34.663 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60146' 00:06:34.663 01:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60146 00:06:34.663 01:19:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60146 00:06:36.566 00:06:36.566 real 0m3.992s 00:06:36.566 user 0m10.574s 00:06:36.566 sys 0m0.559s 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.566 ************************************ 00:06:36.566 END TEST locking_overlapped_coremask 00:06:36.566 ************************************ 00:06:36.566 01:19:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:36.566 01:19:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.566 01:19:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.566 01:19:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.566 ************************************ 00:06:36.566 START TEST locking_overlapped_coremask_via_rpc 00:06:36.566 ************************************ 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60223 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60223 /var/tmp/spdk.sock 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60223 ']' 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.566 01:19:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.825 [2024-09-28 01:19:32.563785] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:36.825 [2024-09-28 01:19:32.563979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 00:06:36.825 [2024-09-28 01:19:32.735442] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
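After the failed second target exits, check_remaining_locks at the end of the locking_overlapped_coremask test above expands whatever lock files exist under /var/tmp and requires them to match exactly the set a 0x7 mask should leave behind, spdk_cpu_lock_000 through spdk_cpu_lock_002. A condensed sketch of that comparison:

    # Sketch: the remaining lock files must be exactly those for cores 0-2.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}" >&2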
00:06:36.825 [2024-09-28 01:19:32.735533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.084 [2024-09-28 01:19:32.942803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.084 [2024-09-28 01:19:32.942917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.084 [2024-09-28 01:19:32.942927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.342 [2024-09-28 01:19:33.170487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60246 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60246 /var/tmp/spdk2.sock 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60246 ']' 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.907 01:19:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.907 [2024-09-28 01:19:33.762572] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:37.908 [2024-09-28 01:19:33.762749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60246 ] 00:06:38.166 [2024-09-28 01:19:33.927526] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
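Both targets in this via_rpc variant are started with --disable-cpumask-locks, which is why the overlapping 0x7 and 0x1c masks are both accepted here instead of one launch failing as in the previous tests. In outline, the two launches recorded above are:

    # Sketch of the two launches above, backgrounded for illustration, with the pids the log reports.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # pid 60223
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # pid 60246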
00:06:38.166 [2024-09-28 01:19:33.931586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.423 [2024-09-28 01:19:34.276509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.423 [2024-09-28 01:19:34.279654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.423 [2024-09-28 01:19:34.279680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.988 [2024-09-28 01:19:34.701563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.923 [2024-09-28 01:19:35.704715] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60223 has claimed it. 
00:06:39.923 request: 00:06:39.923 { 00:06:39.923 "method": "framework_enable_cpumask_locks", 00:06:39.923 "req_id": 1 00:06:39.923 } 00:06:39.923 Got JSON-RPC error response 00:06:39.923 response: 00:06:39.923 { 00:06:39.923 "code": -32603, 00:06:39.923 "message": "Failed to claim CPU core: 2" 00:06:39.923 } 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60223 /var/tmp/spdk.sock 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60223 ']' 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.923 01:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60246 /var/tmp/spdk2.sock 00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60246 ']' 00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
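The test enables the locks over RPC on the primary target and expects the same call to fail on the secondary, because core 2 already belongs to pid 60223; the -32603 response above is that expected failure. Issued by hand against the secondary's socket, the call would look roughly like:

    # Sketch: the RPC that the NOT rpc_cmd wrapper sends to the second target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected result here: JSON-RPC error -32603, 'Failed to claim CPU core: 2'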
00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.188 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.451 ************************************ 00:06:40.451 END TEST locking_overlapped_coremask_via_rpc 00:06:40.451 ************************************ 00:06:40.451 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.451 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.451 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.451 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.451 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.451 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.451 00:06:40.451 real 0m3.897s 00:06:40.451 user 0m1.556s 00:06:40.451 sys 0m0.173s 00:06:40.451 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.451 01:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.451 01:19:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.451 01:19:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60223 ]] 00:06:40.451 01:19:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60223 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60223 ']' 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60223 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60223 00:06:40.451 killing process with pid 60223 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60223' 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60223 00:06:40.451 01:19:36 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60223 00:06:42.989 01:19:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60246 ]] 00:06:42.989 01:19:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60246 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60246 ']' 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60246 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.989 
01:19:38 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60246 00:06:42.989 killing process with pid 60246 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60246' 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60246 00:06:42.989 01:19:38 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60246 00:06:45.535 01:19:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.535 01:19:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:45.535 01:19:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60223 ]] 00:06:45.535 01:19:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60223 00:06:45.535 Process with pid 60223 is not found 00:06:45.535 01:19:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60223 ']' 00:06:45.535 01:19:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60223 00:06:45.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60223) - No such process 00:06:45.535 01:19:41 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60223 is not found' 00:06:45.535 01:19:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60246 ]] 00:06:45.535 01:19:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60246 00:06:45.535 01:19:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60246 ']' 00:06:45.535 01:19:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60246 00:06:45.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60246) - No such process 00:06:45.535 Process with pid 60246 is not found 00:06:45.535 01:19:41 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60246 is not found' 00:06:45.535 01:19:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.535 00:06:45.535 real 0m44.084s 00:06:45.535 user 1m16.697s 00:06:45.535 sys 0m6.096s 00:06:45.535 01:19:41 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.535 01:19:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.535 ************************************ 00:06:45.535 END TEST cpu_locks 00:06:45.535 ************************************ 00:06:45.535 ************************************ 00:06:45.535 END TEST event 00:06:45.535 ************************************ 00:06:45.535 00:06:45.535 real 1m17.976s 00:06:45.535 user 2m26.861s 00:06:45.535 sys 0m9.810s 00:06:45.535 01:19:41 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.535 01:19:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.535 01:19:41 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.535 01:19:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.535 01:19:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.535 01:19:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.535 ************************************ 00:06:45.535 START TEST thread 00:06:45.535 ************************************ 00:06:45.535 01:19:41 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.535 * Looking for test storage... 
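By the time the cpu_locks cleanup above runs, both targets have already been killed by the test itself, so the kill -0 probe inside killprocess fails and the helper only reports that each pid is not found before the lock files are removed. A reduced sketch of that probe, using pid 60223 as the example:

    # Sketch: only send a signal if the process still exists.
    if kill -0 60223 2>/dev/null; then
      kill 60223
    else
      echo 'Process with pid 60223 is not found'
    fi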
00:06:45.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:45.535 01:19:41 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:45.535 01:19:41 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:45.535 01:19:41 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:45.535 01:19:41 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:45.535 01:19:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.535 01:19:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.535 01:19:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.535 01:19:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.535 01:19:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.535 01:19:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.535 01:19:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.535 01:19:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.535 01:19:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.535 01:19:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.535 01:19:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.535 01:19:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:45.535 01:19:41 thread -- scripts/common.sh@345 -- # : 1 00:06:45.535 01:19:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.536 01:19:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.536 01:19:41 thread -- scripts/common.sh@365 -- # decimal 1 00:06:45.536 01:19:41 thread -- scripts/common.sh@353 -- # local d=1 00:06:45.536 01:19:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.536 01:19:41 thread -- scripts/common.sh@355 -- # echo 1 00:06:45.536 01:19:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.536 01:19:41 thread -- scripts/common.sh@366 -- # decimal 2 00:06:45.795 01:19:41 thread -- scripts/common.sh@353 -- # local d=2 00:06:45.795 01:19:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.795 01:19:41 thread -- scripts/common.sh@355 -- # echo 2 00:06:45.795 01:19:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.795 01:19:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.795 01:19:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.795 01:19:41 thread -- scripts/common.sh@368 -- # return 0 00:06:45.795 01:19:41 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.795 01:19:41 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:45.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.795 --rc genhtml_branch_coverage=1 00:06:45.795 --rc genhtml_function_coverage=1 00:06:45.795 --rc genhtml_legend=1 00:06:45.795 --rc geninfo_all_blocks=1 00:06:45.795 --rc geninfo_unexecuted_blocks=1 00:06:45.795 00:06:45.795 ' 00:06:45.795 01:19:41 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:45.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.795 --rc genhtml_branch_coverage=1 00:06:45.795 --rc genhtml_function_coverage=1 00:06:45.795 --rc genhtml_legend=1 00:06:45.795 --rc geninfo_all_blocks=1 00:06:45.795 --rc geninfo_unexecuted_blocks=1 00:06:45.795 00:06:45.795 ' 00:06:45.795 01:19:41 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:45.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:45.795 --rc genhtml_branch_coverage=1 00:06:45.795 --rc genhtml_function_coverage=1 00:06:45.795 --rc genhtml_legend=1 00:06:45.795 --rc geninfo_all_blocks=1 00:06:45.795 --rc geninfo_unexecuted_blocks=1 00:06:45.795 00:06:45.795 ' 00:06:45.795 01:19:41 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:45.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.795 --rc genhtml_branch_coverage=1 00:06:45.795 --rc genhtml_function_coverage=1 00:06:45.795 --rc genhtml_legend=1 00:06:45.795 --rc geninfo_all_blocks=1 00:06:45.795 --rc geninfo_unexecuted_blocks=1 00:06:45.795 00:06:45.795 ' 00:06:45.795 01:19:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.795 01:19:41 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:45.795 01:19:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.795 01:19:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.795 ************************************ 00:06:45.795 START TEST thread_poller_perf 00:06:45.795 ************************************ 00:06:45.795 01:19:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.795 [2024-09-28 01:19:41.524660] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:45.795 [2024-09-28 01:19:41.524822] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60434 ] 00:06:45.795 [2024-09-28 01:19:41.693333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.054 [2024-09-28 01:19:41.917257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.054 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:47.432 ====================================== 00:06:47.432 busy:2215077046 (cyc) 00:06:47.432 total_run_count: 265000 00:06:47.432 tsc_hz: 2200000000 (cyc) 00:06:47.432 ====================================== 00:06:47.432 poller_cost: 8358 (cyc), 3799 (nsec) 00:06:47.432 00:06:47.432 real 0m1.865s 00:06:47.432 user 0m1.659s 00:06:47.432 sys 0m0.096s 00:06:47.432 01:19:43 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.432 01:19:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.432 ************************************ 00:06:47.432 END TEST thread_poller_perf 00:06:47.432 ************************************ 00:06:47.691 01:19:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.691 01:19:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:47.691 01:19:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.691 01:19:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.691 ************************************ 00:06:47.691 START TEST thread_poller_perf 00:06:47.691 ************************************ 00:06:47.691 01:19:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.691 [2024-09-28 01:19:43.443416] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:47.691 [2024-09-28 01:19:43.443594] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60476 ] 00:06:47.691 [2024-09-28 01:19:43.616069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.951 [2024-09-28 01:19:43.824371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.951 Running 1000 pollers for 1 seconds with 0 microseconds period. 
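For the 1 microsecond-period run reported above, poller_cost is simply the busy cycle count divided by the number of poller runs, and the nanosecond figure follows from the 2200000000 Hz TSC. The same arithmetic in the shell:

    # Sketch: reproduce poller_cost for the first run from its counters.
    busy=2215077046; runs=265000; tsc_hz=2200000000
    echo "$(( busy / runs )) cyc, $(( busy / runs * 1000000000 / tsc_hz )) nsec"   # 8358 cyc, 3799 nsec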
00:06:49.328 ====================================== 00:06:49.328 busy:2204202977 (cyc) 00:06:49.328 total_run_count: 3426000 00:06:49.328 tsc_hz: 2200000000 (cyc) 00:06:49.328 ====================================== 00:06:49.328 poller_cost: 643 (cyc), 292 (nsec) 00:06:49.328 00:06:49.328 real 0m1.843s 00:06:49.328 user 0m1.629s 00:06:49.328 sys 0m0.103s 00:06:49.328 01:19:45 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.328 ************************************ 00:06:49.328 END TEST thread_poller_perf 00:06:49.328 ************************************ 00:06:49.328 01:19:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.587 01:19:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:49.587 00:06:49.587 real 0m3.996s 00:06:49.587 user 0m3.433s 00:06:49.587 sys 0m0.342s 00:06:49.587 01:19:45 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.587 01:19:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.587 ************************************ 00:06:49.587 END TEST thread 00:06:49.587 ************************************ 00:06:49.587 01:19:45 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:49.587 01:19:45 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.587 01:19:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.587 01:19:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.587 01:19:45 -- common/autotest_common.sh@10 -- # set +x 00:06:49.587 ************************************ 00:06:49.587 START TEST app_cmdline 00:06:49.587 ************************************ 00:06:49.587 01:19:45 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.587 * Looking for test storage... 00:06:49.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.587 01:19:45 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:49.587 01:19:45 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:49.587 01:19:45 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:49.587 01:19:45 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.587 01:19:45 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:49.588 01:19:45 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:49.588 01:19:45 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.588 01:19:45 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:49.588 01:19:45 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.588 01:19:45 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.588 01:19:45 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.588 01:19:45 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:49.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.588 --rc genhtml_branch_coverage=1 00:06:49.588 --rc genhtml_function_coverage=1 00:06:49.588 --rc genhtml_legend=1 00:06:49.588 --rc geninfo_all_blocks=1 00:06:49.588 --rc geninfo_unexecuted_blocks=1 00:06:49.588 00:06:49.588 ' 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:49.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.588 --rc genhtml_branch_coverage=1 00:06:49.588 --rc genhtml_function_coverage=1 00:06:49.588 --rc genhtml_legend=1 00:06:49.588 --rc geninfo_all_blocks=1 00:06:49.588 --rc geninfo_unexecuted_blocks=1 00:06:49.588 00:06:49.588 ' 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:49.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.588 --rc genhtml_branch_coverage=1 00:06:49.588 --rc genhtml_function_coverage=1 00:06:49.588 --rc genhtml_legend=1 00:06:49.588 --rc geninfo_all_blocks=1 00:06:49.588 --rc geninfo_unexecuted_blocks=1 00:06:49.588 00:06:49.588 ' 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:49.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.588 --rc genhtml_branch_coverage=1 00:06:49.588 --rc genhtml_function_coverage=1 00:06:49.588 --rc genhtml_legend=1 00:06:49.588 --rc geninfo_all_blocks=1 00:06:49.588 --rc geninfo_unexecuted_blocks=1 00:06:49.588 00:06:49.588 ' 00:06:49.588 01:19:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:49.588 01:19:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60565 00:06:49.588 01:19:45 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:49.588 01:19:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60565 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60565 ']' 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.588 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.588 01:19:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.847 [2024-09-28 01:19:45.624314] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:49.847 [2024-09-28 01:19:45.624513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60565 ] 00:06:50.108 [2024-09-28 01:19:45.787918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.108 [2024-09-28 01:19:45.985876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.369 [2024-09-28 01:19:46.224731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.936 01:19:46 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.936 01:19:46 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:50.936 01:19:46 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:51.195 { 00:06:51.195 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:51.195 "fields": { 00:06:51.195 "major": 25, 00:06:51.195 "minor": 1, 00:06:51.195 "patch": 0, 00:06:51.195 "suffix": "-pre", 00:06:51.195 "commit": "09cc66129" 00:06:51.195 } 00:06:51.195 } 00:06:51.195 01:19:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:51.195 01:19:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:51.195 01:19:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:51.195 01:19:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:51.195 01:19:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:51.195 01:19:47 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.195 01:19:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.195 01:19:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:51.195 01:19:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:51.195 01:19:47 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.454 01:19:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:51.454 01:19:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:51.454 01:19:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.454 01:19:47 app_cmdline -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:51.454 01:19:47 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.713 request: 00:06:51.713 { 00:06:51.713 "method": "env_dpdk_get_mem_stats", 00:06:51.713 "req_id": 1 00:06:51.713 } 00:06:51.713 Got JSON-RPC error response 00:06:51.713 response: 00:06:51.713 { 00:06:51.713 "code": -32601, 00:06:51.713 "message": "Method not found" 00:06:51.713 } 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.713 01:19:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60565 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60565 ']' 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60565 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60565 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.713 killing process with pid 60565 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60565' 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@969 -- # kill 60565 00:06:51.713 01:19:47 app_cmdline -- common/autotest_common.sh@974 -- # wait 60565 00:06:53.619 00:06:53.619 real 0m4.168s 00:06:53.619 user 0m4.819s 00:06:53.619 sys 0m0.544s 00:06:53.619 01:19:49 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.619 01:19:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.619 ************************************ 00:06:53.619 END TEST app_cmdline 00:06:53.619 ************************************ 00:06:53.619 01:19:49 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:53.619 01:19:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.619 01:19:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.619 01:19:49 -- common/autotest_common.sh@10 -- # set +x 00:06:53.877 ************************************ 00:06:53.877 START TEST version 00:06:53.877 ************************************ 00:06:53.877 01:19:49 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:53.877 * Looking for test storage... 
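The app_cmdline run above starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods answer and the env_dpdk_get_mem_stats probe is rejected with -32601. Repeated by hand against the default socket, the two outcomes would be roughly:

    # Sketch: allowed versus blocked RPCs on a --rpcs-allowed target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # returns the version object shown above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # JSON-RPC error -32601, 'Method not found'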
00:06:53.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.877 01:19:49 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.877 01:19:49 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.877 01:19:49 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.877 01:19:49 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.877 01:19:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.877 01:19:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.877 01:19:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.877 01:19:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.877 01:19:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.877 01:19:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.877 01:19:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.877 01:19:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.877 01:19:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.877 01:19:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.877 01:19:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.877 01:19:49 version -- scripts/common.sh@344 -- # case "$op" in 00:06:53.877 01:19:49 version -- scripts/common.sh@345 -- # : 1 00:06:53.877 01:19:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.877 01:19:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.877 01:19:49 version -- scripts/common.sh@365 -- # decimal 1 00:06:53.877 01:19:49 version -- scripts/common.sh@353 -- # local d=1 00:06:53.877 01:19:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.877 01:19:49 version -- scripts/common.sh@355 -- # echo 1 00:06:53.877 01:19:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.877 01:19:49 version -- scripts/common.sh@366 -- # decimal 2 00:06:53.877 01:19:49 version -- scripts/common.sh@353 -- # local d=2 00:06:53.877 01:19:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.877 01:19:49 version -- scripts/common.sh@355 -- # echo 2 00:06:53.877 01:19:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.877 01:19:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.877 01:19:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.877 01:19:49 version -- scripts/common.sh@368 -- # return 0 00:06:53.877 01:19:49 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.877 01:19:49 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.878 --rc genhtml_branch_coverage=1 00:06:53.878 --rc genhtml_function_coverage=1 00:06:53.878 --rc genhtml_legend=1 00:06:53.878 --rc geninfo_all_blocks=1 00:06:53.878 --rc geninfo_unexecuted_blocks=1 00:06:53.878 00:06:53.878 ' 00:06:53.878 01:19:49 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.878 --rc genhtml_branch_coverage=1 00:06:53.878 --rc genhtml_function_coverage=1 00:06:53.878 --rc genhtml_legend=1 00:06:53.878 --rc geninfo_all_blocks=1 00:06:53.878 --rc geninfo_unexecuted_blocks=1 00:06:53.878 00:06:53.878 ' 00:06:53.878 01:19:49 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.878 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:53.878 --rc genhtml_branch_coverage=1 00:06:53.878 --rc genhtml_function_coverage=1 00:06:53.878 --rc genhtml_legend=1 00:06:53.878 --rc geninfo_all_blocks=1 00:06:53.878 --rc geninfo_unexecuted_blocks=1 00:06:53.878 00:06:53.878 ' 00:06:53.878 01:19:49 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.878 --rc genhtml_branch_coverage=1 00:06:53.878 --rc genhtml_function_coverage=1 00:06:53.878 --rc genhtml_legend=1 00:06:53.878 --rc geninfo_all_blocks=1 00:06:53.878 --rc geninfo_unexecuted_blocks=1 00:06:53.878 00:06:53.878 ' 00:06:53.878 01:19:49 version -- app/version.sh@17 -- # get_header_version major 00:06:53.878 01:19:49 version -- app/version.sh@14 -- # cut -f2 00:06:53.878 01:19:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.878 01:19:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.878 01:19:49 version -- app/version.sh@17 -- # major=25 00:06:53.878 01:19:49 version -- app/version.sh@18 -- # get_header_version minor 00:06:53.878 01:19:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.878 01:19:49 version -- app/version.sh@14 -- # cut -f2 00:06:53.878 01:19:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.878 01:19:49 version -- app/version.sh@18 -- # minor=1 00:06:53.878 01:19:49 version -- app/version.sh@19 -- # get_header_version patch 00:06:53.878 01:19:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.878 01:19:49 version -- app/version.sh@14 -- # cut -f2 00:06:53.878 01:19:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.878 01:19:49 version -- app/version.sh@19 -- # patch=0 00:06:53.878 01:19:49 version -- app/version.sh@20 -- # get_header_version suffix 00:06:53.878 01:19:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.878 01:19:49 version -- app/version.sh@14 -- # cut -f2 00:06:53.878 01:19:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.878 01:19:49 version -- app/version.sh@20 -- # suffix=-pre 00:06:53.878 01:19:49 version -- app/version.sh@22 -- # version=25.1 00:06:53.878 01:19:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:53.878 01:19:49 version -- app/version.sh@28 -- # version=25.1rc0 00:06:53.878 01:19:49 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:53.878 01:19:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:54.136 01:19:49 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:54.137 01:19:49 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:54.137 00:06:54.137 real 0m0.260s 00:06:54.137 user 0m0.186s 00:06:54.137 sys 0m0.113s 00:06:54.137 01:19:49 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.137 01:19:49 version -- common/autotest_common.sh@10 -- # set +x 00:06:54.137 ************************************ 00:06:54.137 END TEST version 00:06:54.137 ************************************ 00:06:54.137 01:19:49 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:54.137 01:19:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:54.137 01:19:49 -- spdk/autotest.sh@194 -- # uname -s 00:06:54.137 01:19:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:54.137 01:19:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:54.137 01:19:49 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:54.137 01:19:49 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:54.137 01:19:49 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:54.137 01:19:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.137 01:19:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.137 01:19:49 -- common/autotest_common.sh@10 -- # set +x 00:06:54.137 ************************************ 00:06:54.137 START TEST spdk_dd 00:06:54.137 ************************************ 00:06:54.137 01:19:49 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:54.137 * Looking for test storage... 00:06:54.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:54.137 01:19:49 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.137 01:19:49 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.137 01:19:49 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.137 01:19:50 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:54.137 01:19:50 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.137 01:19:50 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.137 --rc genhtml_branch_coverage=1 00:06:54.137 --rc genhtml_function_coverage=1 00:06:54.137 --rc genhtml_legend=1 00:06:54.137 --rc geninfo_all_blocks=1 00:06:54.137 --rc geninfo_unexecuted_blocks=1 00:06:54.137 00:06:54.137 ' 00:06:54.137 01:19:50 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.137 --rc genhtml_branch_coverage=1 00:06:54.137 --rc genhtml_function_coverage=1 00:06:54.137 --rc genhtml_legend=1 00:06:54.137 --rc geninfo_all_blocks=1 00:06:54.137 --rc geninfo_unexecuted_blocks=1 00:06:54.137 00:06:54.137 ' 00:06:54.137 01:19:50 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.137 --rc genhtml_branch_coverage=1 00:06:54.137 --rc genhtml_function_coverage=1 00:06:54.137 --rc genhtml_legend=1 00:06:54.137 --rc geninfo_all_blocks=1 00:06:54.137 --rc geninfo_unexecuted_blocks=1 00:06:54.137 00:06:54.137 ' 00:06:54.137 01:19:50 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.137 --rc genhtml_branch_coverage=1 00:06:54.137 --rc genhtml_function_coverage=1 00:06:54.137 --rc genhtml_legend=1 00:06:54.137 --rc geninfo_all_blocks=1 00:06:54.137 --rc geninfo_unexecuted_blocks=1 00:06:54.137 00:06:54.137 ' 00:06:54.137 01:19:50 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.137 01:19:50 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.395 01:19:50 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.395 01:19:50 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.395 01:19:50 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.395 01:19:50 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.395 01:19:50 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.395 01:19:50 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.395 01:19:50 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:54.395 01:19:50 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.395 01:19:50 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:54.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.704 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:54.704 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:54.704 01:19:50 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:54.704 01:19:50 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:54.704 01:19:50 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:54.704 01:19:50 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:54.705 01:19:50 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:54.705 01:19:50 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:54.705 
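(Annotation, not part of the captured run.) The nvme_in_userspace trace just above enumerates NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), then drops any device the run is not allowed to touch before printing 0000:00:10.0 and 0000:00:11.0. A rough stand-alone sketch of that enumeration, assuming lspci is available; the function name and the PCI_BLOCKED convention are illustrative, not the exact scripts/common.sh helpers:

    nvme_bdfs_sketch() {
        local bdf
        # lspci -mm -n -D prints: <domain:bus:dev.fn> "<class+subclass>" "<vendor>" "<device>" ... -p<progif>
        # keep NVMe controllers: class/subclass 0108 with programming interface 02
        lspci -mm -n -D | grep -i -- -p02 |
            awk '{ gsub(/"/, "", $2); if ($2 == "0108") print $1 }' |
            while read -r bdf; do
                # skip devices the environment has explicitly blocked (illustrative check)
                [[ " ${PCI_BLOCKED:-} " == *" $bdf "* ]] || echo "$bdf"
            done
    }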
01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 
01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:54.705 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:54.706 * spdk_dd linked to liburing 00:06:54.706 01:19:50 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:54.707 01:19:50 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:06:54.707 01:19:50 spdk_dd -- 
common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:06:54.707 01:19:50 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:06:54.707 01:19:50 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:06:54.707 01:19:50 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:54.707 01:19:50 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:54.707 01:19:50 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:54.707 01:19:50 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:54.707 01:19:50 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:54.707 01:19:50 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:54.707 01:19:50 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:54.707 01:19:50 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.707 01:19:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:54.707 ************************************ 00:06:54.707 START TEST spdk_dd_basic_rw 00:06:54.707 ************************************ 00:06:54.707 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:54.967 * Looking for test storage... 
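(Annotation, not part of the captured run.) Before the basic_rw suite starts, dd/dd.sh has just confirmed that the spdk_dd binary really is linked against liburing: dd/common.sh ran objdump -p on the binary, kept the NEEDED entries, compared each dependency against liburing.so.*, found liburing.so.2, sourced build_config.sh, and exported liburing_in_use=1. A condensed sketch of that detection plus the guard at dd/dd.sh@15; the error handling at the end is an assumption, since the excerpt does not show what dd.sh does when the guard fires:

    # Detect whether spdk_dd was linked against liburing (mirrors dd/common.sh's loop).
    liburing_in_use=0
    while read -r _ lib _; do
        # objdump -p prints dynamic entries such as "NEEDED liburing.so.2"
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)

    # Guard from dd/dd.sh@15: a uring-enabled test run with a non-uring binary is a
    # configuration error, so bail out early (message and exit code are illustrative).
    if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
        echo "SPDK_TEST_URING=1 but spdk_dd is not linked to liburing" >&2
        exit 1
    fi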
00:06:54.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.967 --rc genhtml_branch_coverage=1 00:06:54.967 --rc genhtml_function_coverage=1 00:06:54.967 --rc genhtml_legend=1 00:06:54.967 --rc geninfo_all_blocks=1 00:06:54.967 --rc geninfo_unexecuted_blocks=1 00:06:54.967 00:06:54.967 ' 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.967 --rc genhtml_branch_coverage=1 00:06:54.967 --rc genhtml_function_coverage=1 00:06:54.967 --rc genhtml_legend=1 00:06:54.967 --rc geninfo_all_blocks=1 00:06:54.967 --rc geninfo_unexecuted_blocks=1 00:06:54.967 00:06:54.967 ' 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.967 --rc genhtml_branch_coverage=1 00:06:54.967 --rc genhtml_function_coverage=1 00:06:54.967 --rc genhtml_legend=1 00:06:54.967 --rc geninfo_all_blocks=1 00:06:54.967 --rc geninfo_unexecuted_blocks=1 00:06:54.967 00:06:54.967 ' 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.967 --rc genhtml_branch_coverage=1 00:06:54.967 --rc genhtml_function_coverage=1 00:06:54.967 --rc genhtml_legend=1 00:06:54.967 --rc geninfo_all_blocks=1 00:06:54.967 --rc geninfo_unexecuted_blocks=1 00:06:54.967 00:06:54.967 ' 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.967 01:19:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.968 01:19:50 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:54.968 01:19:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:55.229 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:55.229 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.230 ************************************ 00:06:55.230 START TEST dd_bs_lt_native_bs 00:06:55.230 ************************************ 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.230 01:19:51 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:55.230 { 00:06:55.230 "subsystems": [ 00:06:55.230 { 00:06:55.230 "subsystem": "bdev", 00:06:55.230 "config": [ 00:06:55.230 { 00:06:55.230 "params": { 00:06:55.230 "trtype": "pcie", 00:06:55.230 "traddr": "0000:00:10.0", 00:06:55.230 "name": "Nvme0" 00:06:55.230 }, 00:06:55.230 "method": "bdev_nvme_attach_controller" 00:06:55.230 }, 00:06:55.230 { 00:06:55.230 "method": "bdev_wait_for_examine" 00:06:55.230 } 00:06:55.230 ] 00:06:55.230 } 00:06:55.230 ] 00:06:55.230 } 00:06:55.230 [2024-09-28 01:19:51.144841] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:55.230 [2024-09-28 01:19:51.145017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60935 ] 00:06:55.488 [2024-09-28 01:19:51.322547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.747 [2024-09-28 01:19:51.554007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.006 [2024-09-28 01:19:51.734301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.006 [2024-09-28 01:19:51.887061] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:56.006 [2024-09-28 01:19:51.887181] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.575 [2024-09-28 01:19:52.311482] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.834 00:06:56.834 real 0m1.654s 00:06:56.834 user 0m1.397s 00:06:56.834 sys 0m0.203s 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.834 
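Note on the check traced above: the dd/common.sh helper pulls the native block size out of the controller dump by matching the active LBA format line (format #04, 4096-byte data size), and dd_bs_lt_native_bs then asserts that spdk_dd refuses a --bs smaller than that, which is the "--bs value cannot be less than ... native block size" error visible in the trace. Below is a minimal stand-alone sketch of the same check, assuming the controller address and repo layout seen in this log; SPDK_REPO, the scratch input file, and the identify invocation are assumptions of the sketch, not commands this log actually ran.

#!/usr/bin/env bash
# Sketch only: reproduce the bs-vs-native-block-size check, not the test itself.
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}

# One way to capture a controller dump like the one quoted above (assumed;
# the log does not show which command produced its dump).
identify_dump=$(sudo "$SPDK_REPO/build/examples/identify" -r 'trtype:PCIe traddr:0000:00:10.0')

# Same regex the traced helper applies to the dump.
pattern='LBA Format #04: Data Size: *([0-9]+)'
[[ $identify_dump =~ $pattern ]] && native_bs=${BASH_REMATCH[1]}   # 4096 in this run

# Bdev config matching the JSON echoed in the trace.
conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

# Small scratch input; the run should fail before copying anything because
# --bs=2048 is below the 4096-byte native block size of the output bdev.
infile=$(mktemp)
head -c 8192 /dev/urandom > "$infile"

if "$SPDK_REPO/build/bin/spdk_dd" --if="$infile" --ob=Nvme0n1 --bs=2048 \
       --json <(printf '%s' "$conf"); then
    echo "unexpected: spdk_dd accepted --bs below the native block size" >&2
else
    echo "spdk_dd rejected --bs=2048 as expected"
fi
rm -f "$infile"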
************************************ 00:06:56.834 END TEST dd_bs_lt_native_bs 00:06:56.834 ************************************ 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.834 ************************************ 00:06:56.834 START TEST dd_rw 00:06:56.834 ************************************ 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:56.834 01:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.402 01:19:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:57.402 01:19:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:57.402 01:19:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:57.402 01:19:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.402 { 00:06:57.402 "subsystems": [ 00:06:57.402 { 00:06:57.402 "subsystem": "bdev", 00:06:57.402 "config": [ 00:06:57.402 { 00:06:57.402 "params": { 00:06:57.402 "trtype": "pcie", 00:06:57.402 "traddr": "0000:00:10.0", 00:06:57.402 "name": "Nvme0" 00:06:57.402 }, 00:06:57.402 "method": "bdev_nvme_attach_controller" 00:06:57.402 }, 00:06:57.402 { 00:06:57.402 "method": "bdev_wait_for_examine" 00:06:57.402 } 00:06:57.402 ] 
00:06:57.402 } 00:06:57.402 ] 00:06:57.402 } 00:06:57.661 [2024-09-28 01:19:53.340676] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:57.661 [2024-09-28 01:19:53.340855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60979 ] 00:06:57.661 [2024-09-28 01:19:53.514962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.919 [2024-09-28 01:19:53.681272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.919 [2024-09-28 01:19:53.831405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.115  Copying: 60/60 [kB] (average 19 MBps) 00:06:59.115 00:06:59.115 01:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:59.115 01:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:59.115 01:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.115 01:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.373 { 00:06:59.373 "subsystems": [ 00:06:59.373 { 00:06:59.373 "subsystem": "bdev", 00:06:59.373 "config": [ 00:06:59.373 { 00:06:59.373 "params": { 00:06:59.373 "trtype": "pcie", 00:06:59.373 "traddr": "0000:00:10.0", 00:06:59.373 "name": "Nvme0" 00:06:59.373 }, 00:06:59.373 "method": "bdev_nvme_attach_controller" 00:06:59.373 }, 00:06:59.373 { 00:06:59.373 "method": "bdev_wait_for_examine" 00:06:59.373 } 00:06:59.373 ] 00:06:59.373 } 00:06:59.373 ] 00:06:59.373 } 00:06:59.373 [2024-09-28 01:19:55.102892] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:59.373 [2024-09-28 01:19:55.103071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61013 ] 00:06:59.373 [2024-09-28 01:19:55.275936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.632 [2024-09-28 01:19:55.442846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.890 [2024-09-28 01:19:55.609984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.826  Copying: 60/60 [kB] (average 19 MBps) 00:07:00.826 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.826 01:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.826 { 00:07:00.826 "subsystems": [ 00:07:00.826 { 00:07:00.826 "subsystem": "bdev", 00:07:00.826 "config": [ 00:07:00.826 { 00:07:00.826 "params": { 00:07:00.826 "trtype": "pcie", 00:07:00.826 "traddr": "0000:00:10.0", 00:07:00.826 "name": "Nvme0" 00:07:00.826 }, 00:07:00.826 "method": "bdev_nvme_attach_controller" 00:07:00.826 }, 00:07:00.826 { 00:07:00.826 "method": "bdev_wait_for_examine" 00:07:00.826 } 00:07:00.826 ] 00:07:00.826 } 00:07:00.826 ] 00:07:00.826 } 00:07:00.826 [2024-09-28 01:19:56.696033] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
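The passes that follow all repeat the cycle just logged for bs=4096 qd=1: generate a 61440-byte pattern, write dd.dump0 to Nvme0n1 with the given --bs/--qd, read it back into dd.dump1 with --count=15, diff the two dumps, then wipe the bdev with a single 1 MiB write of zeroes before the next pass. A rough stand-alone sketch of one such round trip follows; the /tmp paths, SPDK_REPO and the urandom pattern are assumptions standing in for the test's own dump files and gen_bytes helper.

# One dd_rw round trip, sketched; values match the bs=4096 qd=1 pass above.
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
bs=4096 qd=1 count=15
size=$((bs * count))                                   # 61440 bytes

head -c "$size" /dev/urandom > /tmp/dd.dump0           # stand-in for gen_bytes

# Write the pattern to the bdev, then read the same number of blocks back.
"$SPDK_REPO/build/bin/spdk_dd" --if=/tmp/dd.dump0 --ob=Nvme0n1 \
    --bs="$bs" --qd="$qd" --json <(printf '%s' "$conf")
"$SPDK_REPO/build/bin/spdk_dd" --ib=Nvme0n1 --of=/tmp/dd.dump1 \
    --bs="$bs" --qd="$qd" --count="$count" --json <(printf '%s' "$conf")

diff -q /tmp/dd.dump0 /tmp/dd.dump1 && echo "bs=$bs qd=$qd round trip OK"

# Clear the bdev between passes, mirroring the clear_nvme helper traced above.
"$SPDK_REPO/build/bin/spdk_dd" --if=/dev/zero --ob=Nvme0n1 \
    --bs=1048576 --count=1 --json <(printf '%s' "$conf")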
00:07:00.826 [2024-09-28 01:19:56.696217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61041 ] 00:07:01.084 [2024-09-28 01:19:56.869399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.343 [2024-09-28 01:19:57.039606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.343 [2024-09-28 01:19:57.210433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.537  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:02.537 00:07:02.537 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:02.537 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:02.537 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:02.537 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:02.537 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:02.537 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:02.537 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.104 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:03.104 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:03.104 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.104 01:19:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.104 { 00:07:03.104 "subsystems": [ 00:07:03.104 { 00:07:03.104 "subsystem": "bdev", 00:07:03.104 "config": [ 00:07:03.104 { 00:07:03.104 "params": { 00:07:03.104 "trtype": "pcie", 00:07:03.104 "traddr": "0000:00:10.0", 00:07:03.104 "name": "Nvme0" 00:07:03.104 }, 00:07:03.104 "method": "bdev_nvme_attach_controller" 00:07:03.104 }, 00:07:03.104 { 00:07:03.104 "method": "bdev_wait_for_examine" 00:07:03.104 } 00:07:03.104 ] 00:07:03.104 } 00:07:03.104 ] 00:07:03.104 } 00:07:03.104 [2024-09-28 01:19:59.002561] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:03.104 [2024-09-28 01:19:59.002781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61077 ] 00:07:03.363 [2024-09-28 01:19:59.173591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.622 [2024-09-28 01:19:59.323126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.622 [2024-09-28 01:19:59.473109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.820  Copying: 60/60 [kB] (average 58 MBps) 00:07:04.820 00:07:04.820 01:20:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:04.820 01:20:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:04.820 01:20:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.820 01:20:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.820 { 00:07:04.820 "subsystems": [ 00:07:04.820 { 00:07:04.820 "subsystem": "bdev", 00:07:04.820 "config": [ 00:07:04.820 { 00:07:04.820 "params": { 00:07:04.820 "trtype": "pcie", 00:07:04.820 "traddr": "0000:00:10.0", 00:07:04.820 "name": "Nvme0" 00:07:04.820 }, 00:07:04.820 "method": "bdev_nvme_attach_controller" 00:07:04.820 }, 00:07:04.820 { 00:07:04.820 "method": "bdev_wait_for_examine" 00:07:04.820 } 00:07:04.820 ] 00:07:04.820 } 00:07:04.820 ] 00:07:04.820 } 00:07:04.820 [2024-09-28 01:20:00.599914] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:04.820 [2024-09-28 01:20:00.600096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61097 ] 00:07:05.079 [2024-09-28 01:20:00.766422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.079 [2024-09-28 01:20:00.919447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.339 [2024-09-28 01:20:01.076072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.275  Copying: 60/60 [kB] (average 58 MBps) 00:07:06.275 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.275 01:20:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.534 { 00:07:06.534 "subsystems": [ 00:07:06.534 { 00:07:06.534 "subsystem": "bdev", 00:07:06.534 "config": [ 00:07:06.534 { 00:07:06.534 "params": { 00:07:06.534 "trtype": "pcie", 00:07:06.534 "traddr": "0000:00:10.0", 00:07:06.534 "name": "Nvme0" 00:07:06.534 }, 00:07:06.534 "method": "bdev_nvme_attach_controller" 00:07:06.534 }, 00:07:06.534 { 00:07:06.535 "method": "bdev_wait_for_examine" 00:07:06.535 } 00:07:06.535 ] 00:07:06.535 } 00:07:06.535 ] 00:07:06.535 } 00:07:06.535 [2024-09-28 01:20:02.305431] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
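With the 4096-byte passes finished (qd=1 and qd=64 both diff clean), dd_rw works through the rest of the matrix it set up at the start: three block sizes shifted up from the 4096-byte native size and two queue depths, with the block count reduced as the block size grows, as the later passes in this log show (count=7/57344 bytes at 8192, count=3/49152 bytes at 16384). A compact sketch of that sweep is below; round_trip stands for a hypothetical wrapper around the write/read/diff/clear sequence sketched earlier and is not a name taken from the test scripts.

# Parameter sweep implied by the dd_rw setup and the passes that follow.
native_bs=4096
qds=(1 64)
bss=()
for shift_amt in 0 1 2; do
    bss+=($((native_bs << shift_amt)))                 # 4096 8192 16384
done

for bs in "${bss[@]}"; do
    case "$bs" in
        4096)  count=15 ;;    # 61440 bytes, as in the passes above
        8192)  count=7  ;;    # 57344 bytes
        16384) count=3  ;;    # 49152 bytes
    esac
    for qd in "${qds[@]}"; do
        round_trip "$bs" "$qd" "$count"                # hypothetical helper
    done
done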
00:07:06.535 [2024-09-28 01:20:02.305618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61130 ] 00:07:06.793 [2024-09-28 01:20:02.475639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.793 [2024-09-28 01:20:02.630913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.052 [2024-09-28 01:20:02.775414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.991  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:07.991 00:07:07.991 01:20:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:07.991 01:20:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:07.991 01:20:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:07.991 01:20:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:07.991 01:20:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:07.991 01:20:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:07.991 01:20:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:07.991 01:20:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.559 01:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:08.559 01:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:08.559 01:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.559 01:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.559 { 00:07:08.559 "subsystems": [ 00:07:08.559 { 00:07:08.559 "subsystem": "bdev", 00:07:08.559 "config": [ 00:07:08.559 { 00:07:08.559 "params": { 00:07:08.559 "trtype": "pcie", 00:07:08.559 "traddr": "0000:00:10.0", 00:07:08.559 "name": "Nvme0" 00:07:08.559 }, 00:07:08.559 "method": "bdev_nvme_attach_controller" 00:07:08.559 }, 00:07:08.559 { 00:07:08.559 "method": "bdev_wait_for_examine" 00:07:08.559 } 00:07:08.559 ] 00:07:08.559 } 00:07:08.559 ] 00:07:08.559 } 00:07:08.559 [2024-09-28 01:20:04.323464] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:08.559 [2024-09-28 01:20:04.323656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61161 ] 00:07:08.818 [2024-09-28 01:20:04.493022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.818 [2024-09-28 01:20:04.648020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.078 [2024-09-28 01:20:04.797055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.014  Copying: 56/56 [kB] (average 27 MBps) 00:07:10.014 00:07:10.014 01:20:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:10.014 01:20:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:10.014 01:20:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.014 01:20:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.272 { 00:07:10.272 "subsystems": [ 00:07:10.272 { 00:07:10.272 "subsystem": "bdev", 00:07:10.272 "config": [ 00:07:10.272 { 00:07:10.272 "params": { 00:07:10.272 "trtype": "pcie", 00:07:10.272 "traddr": "0000:00:10.0", 00:07:10.272 "name": "Nvme0" 00:07:10.272 }, 00:07:10.272 "method": "bdev_nvme_attach_controller" 00:07:10.272 }, 00:07:10.272 { 00:07:10.272 "method": "bdev_wait_for_examine" 00:07:10.272 } 00:07:10.272 ] 00:07:10.272 } 00:07:10.272 ] 00:07:10.272 } 00:07:10.272 [2024-09-28 01:20:06.046910] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:10.272 [2024-09-28 01:20:06.047207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61187 ] 00:07:10.530 [2024-09-28 01:20:06.218340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.530 [2024-09-28 01:20:06.368076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.789 [2024-09-28 01:20:06.520513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.725  Copying: 56/56 [kB] (average 18 MBps) 00:07:11.726 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.726 01:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.726 { 00:07:11.726 "subsystems": [ 00:07:11.726 { 00:07:11.726 "subsystem": "bdev", 00:07:11.726 "config": [ 00:07:11.726 { 00:07:11.726 "params": { 00:07:11.726 "trtype": "pcie", 00:07:11.726 "traddr": "0000:00:10.0", 00:07:11.726 "name": "Nvme0" 00:07:11.726 }, 00:07:11.726 "method": "bdev_nvme_attach_controller" 00:07:11.726 }, 00:07:11.726 { 00:07:11.726 "method": "bdev_wait_for_examine" 00:07:11.726 } 00:07:11.726 ] 00:07:11.726 } 00:07:11.726 ] 00:07:11.726 } 00:07:11.726 [2024-09-28 01:20:07.638382] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:11.726 [2024-09-28 01:20:07.638588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61214 ] 00:07:11.985 [2024-09-28 01:20:07.810437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.244 [2024-09-28 01:20:07.963477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.244 [2024-09-28 01:20:08.118468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.440  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:13.440 00:07:13.440 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:13.440 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:13.440 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:13.440 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:13.440 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:13.441 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:13.441 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.009 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:14.009 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:14.009 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.009 01:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.009 { 00:07:14.009 "subsystems": [ 00:07:14.009 { 00:07:14.009 "subsystem": "bdev", 00:07:14.009 "config": [ 00:07:14.009 { 00:07:14.009 "params": { 00:07:14.009 "trtype": "pcie", 00:07:14.009 "traddr": "0000:00:10.0", 00:07:14.009 "name": "Nvme0" 00:07:14.009 }, 00:07:14.009 "method": "bdev_nvme_attach_controller" 00:07:14.009 }, 00:07:14.009 { 00:07:14.009 "method": "bdev_wait_for_examine" 00:07:14.009 } 00:07:14.009 ] 00:07:14.009 } 00:07:14.009 ] 00:07:14.009 } 00:07:14.009 [2024-09-28 01:20:09.831549] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:14.009 [2024-09-28 01:20:09.831753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61245 ] 00:07:14.268 [2024-09-28 01:20:10.002390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.269 [2024-09-28 01:20:10.155153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.528 [2024-09-28 01:20:10.305476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.465  Copying: 56/56 [kB] (average 54 MBps) 00:07:15.465 00:07:15.465 01:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:15.465 01:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:15.465 01:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.465 01:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.465 { 00:07:15.465 "subsystems": [ 00:07:15.465 { 00:07:15.465 "subsystem": "bdev", 00:07:15.465 "config": [ 00:07:15.465 { 00:07:15.465 "params": { 00:07:15.465 "trtype": "pcie", 00:07:15.465 "traddr": "0000:00:10.0", 00:07:15.465 "name": "Nvme0" 00:07:15.465 }, 00:07:15.465 "method": "bdev_nvme_attach_controller" 00:07:15.465 }, 00:07:15.465 { 00:07:15.465 "method": "bdev_wait_for_examine" 00:07:15.465 } 00:07:15.465 ] 00:07:15.465 } 00:07:15.465 ] 00:07:15.465 } 00:07:15.465 [2024-09-28 01:20:11.368715] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:15.465 [2024-09-28 01:20:11.368855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61271 ] 00:07:15.724 [2024-09-28 01:20:11.524514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.993 [2024-09-28 01:20:11.679501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.993 [2024-09-28 01:20:11.836968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.226  Copying: 56/56 [kB] (average 54 MBps) 00:07:17.226 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.226 01:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.226 { 00:07:17.226 "subsystems": [ 00:07:17.226 { 00:07:17.226 "subsystem": "bdev", 00:07:17.226 "config": [ 00:07:17.226 { 00:07:17.226 "params": { 00:07:17.226 "trtype": "pcie", 00:07:17.226 "traddr": "0000:00:10.0", 00:07:17.226 "name": "Nvme0" 00:07:17.226 }, 00:07:17.226 "method": "bdev_nvme_attach_controller" 00:07:17.226 }, 00:07:17.226 { 00:07:17.226 "method": "bdev_wait_for_examine" 00:07:17.226 } 00:07:17.226 ] 00:07:17.226 } 00:07:17.226 ] 00:07:17.226 } 00:07:17.226 [2024-09-28 01:20:13.075851] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:17.226 [2024-09-28 01:20:13.076013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61304 ] 00:07:17.485 [2024-09-28 01:20:13.244880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.485 [2024-09-28 01:20:13.389614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.744 [2024-09-28 01:20:13.533015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.570  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:18.570 00:07:18.570 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:18.570 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:18.570 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:18.570 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:18.570 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:18.570 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:18.570 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:18.570 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.138 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:19.138 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:19.138 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.138 01:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.138 { 00:07:19.138 "subsystems": [ 00:07:19.138 { 00:07:19.138 "subsystem": "bdev", 00:07:19.138 "config": [ 00:07:19.138 { 00:07:19.138 "params": { 00:07:19.138 "trtype": "pcie", 00:07:19.138 "traddr": "0000:00:10.0", 00:07:19.138 "name": "Nvme0" 00:07:19.138 }, 00:07:19.138 "method": "bdev_nvme_attach_controller" 00:07:19.138 }, 00:07:19.138 { 00:07:19.138 "method": "bdev_wait_for_examine" 00:07:19.138 } 00:07:19.138 ] 00:07:19.138 } 00:07:19.138 ] 00:07:19.138 } 00:07:19.138 [2024-09-28 01:20:14.987413] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:19.138 [2024-09-28 01:20:14.987605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61334 ] 00:07:19.397 [2024-09-28 01:20:15.156761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.656 [2024-09-28 01:20:15.341345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.656 [2024-09-28 01:20:15.485436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.852  Copying: 48/48 [kB] (average 46 MBps) 00:07:20.852 00:07:20.852 01:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:20.852 01:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:20.852 01:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:20.852 01:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.852 { 00:07:20.852 "subsystems": [ 00:07:20.852 { 00:07:20.852 "subsystem": "bdev", 00:07:20.852 "config": [ 00:07:20.852 { 00:07:20.852 "params": { 00:07:20.852 "trtype": "pcie", 00:07:20.852 "traddr": "0000:00:10.0", 00:07:20.852 "name": "Nvme0" 00:07:20.852 }, 00:07:20.852 "method": "bdev_nvme_attach_controller" 00:07:20.852 }, 00:07:20.852 { 00:07:20.852 "method": "bdev_wait_for_examine" 00:07:20.852 } 00:07:20.852 ] 00:07:20.852 } 00:07:20.852 ] 00:07:20.852 } 00:07:20.852 [2024-09-28 01:20:16.730303] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:20.852 [2024-09-28 01:20:16.730532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61355 ] 00:07:21.111 [2024-09-28 01:20:16.896830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.371 [2024-09-28 01:20:17.065211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.371 [2024-09-28 01:20:17.229823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.567  Copying: 48/48 [kB] (average 46 MBps) 00:07:22.567 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.567 01:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.567 { 00:07:22.567 "subsystems": [ 00:07:22.567 { 00:07:22.567 "subsystem": "bdev", 00:07:22.567 "config": [ 00:07:22.567 { 00:07:22.567 "params": { 00:07:22.567 "trtype": "pcie", 00:07:22.567 "traddr": "0000:00:10.0", 00:07:22.567 "name": "Nvme0" 00:07:22.567 }, 00:07:22.567 "method": "bdev_nvme_attach_controller" 00:07:22.567 }, 00:07:22.567 { 00:07:22.567 "method": "bdev_wait_for_examine" 00:07:22.567 } 00:07:22.567 ] 00:07:22.567 } 00:07:22.567 ] 00:07:22.567 } 00:07:22.567 [2024-09-28 01:20:18.314093] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:22.567 [2024-09-28 01:20:18.314283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61388 ] 00:07:22.567 [2024-09-28 01:20:18.478861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.825 [2024-09-28 01:20:18.640994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.084 [2024-09-28 01:20:18.792351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.019  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:24.019 00:07:24.279 01:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:24.279 01:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:24.279 01:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:24.279 01:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:24.279 01:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:24.279 01:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:24.279 01:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.538 01:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:24.538 01:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:24.538 01:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.538 01:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.538 { 00:07:24.538 "subsystems": [ 00:07:24.538 { 00:07:24.538 "subsystem": "bdev", 00:07:24.538 "config": [ 00:07:24.538 { 00:07:24.538 "params": { 00:07:24.538 "trtype": "pcie", 00:07:24.538 "traddr": "0000:00:10.0", 00:07:24.538 "name": "Nvme0" 00:07:24.538 }, 00:07:24.538 "method": "bdev_nvme_attach_controller" 00:07:24.538 }, 00:07:24.538 { 00:07:24.538 "method": "bdev_wait_for_examine" 00:07:24.538 } 00:07:24.538 ] 00:07:24.538 } 00:07:24.538 ] 00:07:24.538 } 00:07:24.798 [2024-09-28 01:20:20.475242] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:24.798 [2024-09-28 01:20:20.475468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61419 ] 00:07:24.798 [2024-09-28 01:20:20.645647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.057 [2024-09-28 01:20:20.796043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.057 [2024-09-28 01:20:20.947723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.251  Copying: 48/48 [kB] (average 46 MBps) 00:07:26.251 00:07:26.251 01:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:26.251 01:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:26.251 01:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:26.251 01:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 { 00:07:26.251 "subsystems": [ 00:07:26.251 { 00:07:26.251 "subsystem": "bdev", 00:07:26.251 "config": [ 00:07:26.251 { 00:07:26.251 "params": { 00:07:26.251 "trtype": "pcie", 00:07:26.251 "traddr": "0000:00:10.0", 00:07:26.251 "name": "Nvme0" 00:07:26.251 }, 00:07:26.251 "method": "bdev_nvme_attach_controller" 00:07:26.251 }, 00:07:26.251 { 00:07:26.251 "method": "bdev_wait_for_examine" 00:07:26.251 } 00:07:26.251 ] 00:07:26.251 } 00:07:26.251 ] 00:07:26.251 } 00:07:26.251 [2024-09-28 01:20:22.119249] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:26.251 [2024-09-28 01:20:22.119426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61441 ] 00:07:26.510 [2024-09-28 01:20:22.291380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.768 [2024-09-28 01:20:22.450600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.768 [2024-09-28 01:20:22.599777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.000  Copying: 48/48 [kB] (average 46 MBps) 00:07:28.000 00:07:28.000 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.000 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:28.000 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:28.000 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.000 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:28.000 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:28.000 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:28.000 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:28.001 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:28.001 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:28.001 01:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.001 { 00:07:28.001 "subsystems": [ 00:07:28.001 { 00:07:28.001 "subsystem": "bdev", 00:07:28.001 "config": [ 00:07:28.001 { 00:07:28.001 "params": { 00:07:28.001 "trtype": "pcie", 00:07:28.001 "traddr": "0000:00:10.0", 00:07:28.001 "name": "Nvme0" 00:07:28.001 }, 00:07:28.001 "method": "bdev_nvme_attach_controller" 00:07:28.001 }, 00:07:28.001 { 00:07:28.001 "method": "bdev_wait_for_examine" 00:07:28.001 } 00:07:28.001 ] 00:07:28.001 } 00:07:28.001 ] 00:07:28.001 } 00:07:28.001 [2024-09-28 01:20:23.886334] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:28.001 [2024-09-28 01:20:23.886538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61472 ] 00:07:28.259 [2024-09-28 01:20:24.057489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.518 [2024-09-28 01:20:24.240719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.518 [2024-09-28 01:20:24.394354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.714  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:29.714 00:07:29.714 00:07:29.714 real 0m32.670s 00:07:29.714 user 0m27.614s 00:07:29.714 sys 0m13.832s 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.714 ************************************ 00:07:29.714 END TEST dd_rw 00:07:29.714 ************************************ 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.714 ************************************ 00:07:29.714 START TEST dd_rw_offset 00:07:29.714 ************************************ 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=cnsnaob9wqir7h4ul8slugpt6pr4z42xu6zhuwdlvd5aldcembuh8q2ufmi072srd4d7cxxnfjnuri6grfpla5vzqsteo4gbvk2nedchi2ycn22xrh3vmd60u952aa0kyqryui4q6hsnbmjs2lzk8baadjy9gtkalpxdp8eslc2f834e7g9wf4eipa4xgrzyt2q8syzrp9fdxbvld5k940e673dny0epnzapi10jjyxfb2xc5iv8kx82sva01mh2bgf3fkojrwta0spdgkwk7rg27kkahot0hvil2wuae3c8s1yz8q02o84rowb1dqb1ckbk0fcolr4rzyl0hfpghqiko3jl7chjx0pttyn32v86j1wtqjdcxcpd23c61lxgn8zfemipyotuyozxvk8zdhigjftbd3156kyp19n09r8ezz1792ro5klfztdx2anu0e52wt0l9xaaa9kvo0ggwl2achvunmlaszp23bo3w84zoasjdlb9mnkfpngst556x9sowvhm7gcjjjm8upy8236c2yy6h3n6go7og2l0mdmjymimqt2bm224a3xscnege5weo8hx372dnv8uwhpwigegk9bss9z1lok87n6dcc4kc91j7pxc9dsyr15rylku0k2q2um023jxs6nwe3xyb7k5p7j1lsflrlivuq27yiucjp9f9wahkse39t5c21jqk6dgt9iuxuhpx2jjrrghdfqq0h6pws6hber43f4pv460k2mhwh30ar7ln0eev0h8a0i2vlu9eb76zvxvmxe2b9tj5axgyo1m2ju1qmj67nvc1aie7k1xov1beaa25n771q4cbiffuso3ispqwtmdq17iiufyiigcnnckz9vd03qru0hifleivbcweqs49g73oufye83y7h45746zvwldlbi82kzy0jmzp8z2s1fiwmzj4jfsiinxt73q6tlxwz57g7ohlgbooag6jm7mu0tdwl48t16t55yxjfhwgez3mawa7y86tetbb8agmouyju45ystraybemskcdwh6sga7j5tbdj25mp4a7tryf4uzaoiq0xzpxp99t7mxs3ngln2tz21xxvzj0zsz5gp2zrs32l8zitwvwi831nixn88dgteh0lb6tww3li9akdwh2xcj5hd4hr3qy9zawkdyytxzfj21yxu9v99e8ie326n19vhovj83aio0j1b5z7yoxk49bprbjwknp0lue8bmrq0u6l7fzimj4j5rmkmed1y5dyzgnvycpswrxwlnp3nbk8n18kc3q7asnifjjs9rdb5x9ncc4o8xmfyklpy5ggpeg9tbd0is2glj140y5ya2a7iyx0uui63aabab79o4og02n5ccxzio2d5lsx15mvlkx6ykmegbaer354uraw316cornmflekxfuezvykjpiysubmnxfhmqln868pgayww48v881fx0k1iqb197te4hlwixglaafmky6tfo04obac3v37idfkhu7g8b7jfv1ydb2nm91iuajp8807dbwi06aikrj5tyxqaybmrc801s813voyvwbx39uympcbyuli1x9mlzr57y95tak8n68yl14gp7fej3r49nzryu2rbutf84ce312nsncb5fqvlevsrebu5hqtnxo4kgsb28fhd8yvsig11oeeo9uirgo8oqr2b20proh2w35ziek8m8kzstmb5mhq67o1nob0rr5l411p6xsz84e62x4vylw0zsbkt0slr53tdxuuam6bwfmr0jti2xo09nlhptbm160lxhwcrohqee74t15nc3xh90xqt7m4o9nb3lsldkc47nww73qfdkvfsud2iark8yhq2ji8aze9qj2izwbj5mt1yj4ga7lnyx2j47aopiyge24dqp75mbw52r8pc9f9sjh2a7fjgtwy2cj82p9672rhvnr2ncxpyaz4afsbqhj53a064ckluzmfm9o7cc7nz7nfdjxcd653sme4wk9hx7gn2999isga056bg8f9483ea9xtjn7im2ijrp7thvbqeukpuy4ycsoqf1mghjruzczelkog62pyt9o2wxvee8qcy5m5fyu44vm01hdqpsp02zt8cbwugpb9x2z7o6uhtq3hqqjwmp6i3ozkyiuj6jnhdl4mtyj6roh6e0xdo1txh93zwk4nzf509m4kw0why8b8mm53ydw37r4bl6len3sru6nwbf7s7fg3nf7ucbu35hgpm4h47eyylalf6u16viu9l48diaw7v6hrwll8pu211fq9hhrsehjxxy4zug0205ptii9h2h6kl31vi38k4dremh9dbbcg2ygfbdqmjt1k44rsttwqv4ojcacbehlqc0vhq67eg4pavd5kwajjvqmt1d9qzzr3smiu5n5ij5tflc97kiit2h6uot0wyzwud47m1kfq6qmtb7rpgyfwdfpmob1pj9nz1zm6qmqfzzrllcg6dslgyki1zdv1awepr9ztf4ldz30135ewuondo64t84jhg07qzzykemm9du6hh3bd2hbsggru6t2utgukr0wtr3g4x3knwksltapki563emtyscy3dcz860o5cebds21t6m17bckpdji5d3s3uosrjebygq3if3v78tvil1yik2blhq4yzwnz0o9p9a0r7vktxal7kpzmxjx7uxhrgvajqzaxta4heb0glzyu0ft90jndf5qsg5clj1vqut7mv6pwpl2ta1ibrh8p9m6chvqqyi7nd32nih1h71gju6bz9wtzyfnufnkggaf2mqx9n97jxwzd801svkyl9knjbcdcrj7jyl5y7fn9s231noio6o1mqxgxxow01n9ng4k4hgi3jh9ra4odgr794nwsdjqyaejdii54wsgcxgdai00qorjqojuwuy3uocnhf9x248fy0t8u8ag3k9r757yxonqfjus98asm5sqc89tcchziq22zyiye7fl04ujtu48h8x71wtnicdg1t95wwa3b7egzycazmt2skgh2yain3jw2mfvsbsezw4j228m4l08g3tbamsky75tq8vqwfvpd4zzmmd1x30vjiszjjcgbbsjo4j2o0blpqk7no72o54yw4lm6pm8206pnyw9xjfhr5hg2d2opkt61em56syzyo6ac5cfnk7ak6trhiax3m82kp1xopnqzwkm147ijqr22b84gnphu0iit38ct9fwakjox0k29njyefn0psbu91p0icqjxur0fe991gt4fi55jphth68u3zoitauc1iqpmczgmkcq9isfp0i7dmikgod0prmgy2k906ouj7jtklj59bxhm94e4fv5vrr4vnh7ed6lsz6wj80tys1j42l5p7byuycsa61xwy323tmyjg9rihg8kn93qrfisz5tkuvaycqy6dtwgxps396nw8gglyc1qm99l10dgtyximtyc4x5jn6cbmrko6yq7wtgrsgp6y4rzhqcj2uxutjy7k5onu7706o9aa6hmczsn4n4iemzf0nldg9s9l9p67okx1j866p479wx14ous4w7j3rxo
fb5hny5f54qafo41a4u7e20ewvynej9y5t4kk6i199o005wdzbdf0jum7nc5kw4gf85s3wdmaf73clh92xz0810mkwzxf84tz773c8mwo8vkmz8gkfhnc674cpljpto784axsp32s4hx30mp0gmu4os0qvge9j80mwxmd8nez4o1nxi84qtv5cj9u9mzgmpivkxw5l651t6d9j0a13jjcensq1hfc7v643zhuffbzgv0l0taqnddd7j07jpxu4z2rj2w41lm5re2za38qzw3suy39beqf78s7mb8sm8kvapumng2iytlmekdmdy8qdhcv1b4gnzmyuoa1ejntueuk8m2vvbm4irvz80eg6fo39bz9w87h4ajn881ua10j6e43r6ni7i7bmysj8ujykrv72kxzcp69kfuljdn2xl0eyj1lrmi9pjkish4kg4kmign2crdj9k9i748gd4vs0s4iyksku61floxkqmton5ip5sbbsctxwd2s8gt3f4eg9p18dg3eaiol4vcmhi5kkcjac3g0zim2jeq08 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:29.714 01:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:29.714 { 00:07:29.714 "subsystems": [ 00:07:29.714 { 00:07:29.715 "subsystem": "bdev", 00:07:29.715 "config": [ 00:07:29.715 { 00:07:29.715 "params": { 00:07:29.715 "trtype": "pcie", 00:07:29.715 "traddr": "0000:00:10.0", 00:07:29.715 "name": "Nvme0" 00:07:29.715 }, 00:07:29.715 "method": "bdev_nvme_attach_controller" 00:07:29.715 }, 00:07:29.715 { 00:07:29.715 "method": "bdev_wait_for_examine" 00:07:29.715 } 00:07:29.715 ] 00:07:29.715 } 00:07:29.715 ] 00:07:29.715 } 00:07:29.715 [2024-09-28 01:20:25.624241] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:29.715 [2024-09-28 01:20:25.624416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61520 ] 00:07:29.974 [2024-09-28 01:20:25.793973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.232 [2024-09-28 01:20:25.953700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.232 [2024-09-28 01:20:26.113963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.427  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:31.427 00:07:31.427 01:20:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:31.427 01:20:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:31.427 01:20:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:31.427 01:20:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:31.427 { 00:07:31.427 "subsystems": [ 00:07:31.427 { 00:07:31.427 "subsystem": "bdev", 00:07:31.427 "config": [ 00:07:31.427 { 00:07:31.427 "params": { 00:07:31.427 "trtype": "pcie", 00:07:31.427 "traddr": "0000:00:10.0", 00:07:31.427 "name": "Nvme0" 00:07:31.427 }, 00:07:31.427 "method": "bdev_nvme_attach_controller" 00:07:31.427 }, 00:07:31.427 { 00:07:31.427 "method": "bdev_wait_for_examine" 00:07:31.427 } 00:07:31.427 ] 00:07:31.427 } 00:07:31.427 ] 00:07:31.427 } 00:07:31.686 [2024-09-28 01:20:27.400723] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
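Note on the dd_rw_offset flow traced above: the test writes a 4096-byte pattern to the Nvme0n1 bdev one logical block in (--seek=1), then reads the same region back into a file (--skip=1 --count=1) and compares it against the original data. A minimal sketch of that flow, assuming the spdk_dd binary path, dump files, and bdev config shown in this log (dd.dump0 is assumed to already hold the generated pattern, and the PCIe device at 0000:00:10.0 must actually exist for the copy to succeed):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
  # write the pattern one block into the bdev; the config arrives on /dev/fd/NN
  "$DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(echo "$CONF")
  # read the same region back out and compare the first 4096 bytes
  "$DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(echo "$CONF")
  read -rn4096 data_check < dd.dump1
  [[ "$data_check" == "$(head -c 4096 dd.dump0)" ]] && echo "offset read/write OK"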
00:07:31.686 [2024-09-28 01:20:27.400894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61546 ] 00:07:31.686 [2024-09-28 01:20:27.566256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.945 [2024-09-28 01:20:27.734558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.204 [2024-09-28 01:20:27.893252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.141  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:33.141 00:07:33.141 01:20:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ cnsnaob9wqir7h4ul8slugpt6pr4z42xu6zhuwdlvd5aldcembuh8q2ufmi072srd4d7cxxnfjnuri6grfpla5vzqsteo4gbvk2nedchi2ycn22xrh3vmd60u952aa0kyqryui4q6hsnbmjs2lzk8baadjy9gtkalpxdp8eslc2f834e7g9wf4eipa4xgrzyt2q8syzrp9fdxbvld5k940e673dny0epnzapi10jjyxfb2xc5iv8kx82sva01mh2bgf3fkojrwta0spdgkwk7rg27kkahot0hvil2wuae3c8s1yz8q02o84rowb1dqb1ckbk0fcolr4rzyl0hfpghqiko3jl7chjx0pttyn32v86j1wtqjdcxcpd23c61lxgn8zfemipyotuyozxvk8zdhigjftbd3156kyp19n09r8ezz1792ro5klfztdx2anu0e52wt0l9xaaa9kvo0ggwl2achvunmlaszp23bo3w84zoasjdlb9mnkfpngst556x9sowvhm7gcjjjm8upy8236c2yy6h3n6go7og2l0mdmjymimqt2bm224a3xscnege5weo8hx372dnv8uwhpwigegk9bss9z1lok87n6dcc4kc91j7pxc9dsyr15rylku0k2q2um023jxs6nwe3xyb7k5p7j1lsflrlivuq27yiucjp9f9wahkse39t5c21jqk6dgt9iuxuhpx2jjrrghdfqq0h6pws6hber43f4pv460k2mhwh30ar7ln0eev0h8a0i2vlu9eb76zvxvmxe2b9tj5axgyo1m2ju1qmj67nvc1aie7k1xov1beaa25n771q4cbiffuso3ispqwtmdq17iiufyiigcnnckz9vd03qru0hifleivbcweqs49g73oufye83y7h45746zvwldlbi82kzy0jmzp8z2s1fiwmzj4jfsiinxt73q6tlxwz57g7ohlgbooag6jm7mu0tdwl48t16t55yxjfhwgez3mawa7y86tetbb8agmouyju45ystraybemskcdwh6sga7j5tbdj25mp4a7tryf4uzaoiq0xzpxp99t7mxs3ngln2tz21xxvzj0zsz5gp2zrs32l8zitwvwi831nixn88dgteh0lb6tww3li9akdwh2xcj5hd4hr3qy9zawkdyytxzfj21yxu9v99e8ie326n19vhovj83aio0j1b5z7yoxk49bprbjwknp0lue8bmrq0u6l7fzimj4j5rmkmed1y5dyzgnvycpswrxwlnp3nbk8n18kc3q7asnifjjs9rdb5x9ncc4o8xmfyklpy5ggpeg9tbd0is2glj140y5ya2a7iyx0uui63aabab79o4og02n5ccxzio2d5lsx15mvlkx6ykmegbaer354uraw316cornmflekxfuezvykjpiysubmnxfhmqln868pgayww48v881fx0k1iqb197te4hlwixglaafmky6tfo04obac3v37idfkhu7g8b7jfv1ydb2nm91iuajp8807dbwi06aikrj5tyxqaybmrc801s813voyvwbx39uympcbyuli1x9mlzr57y95tak8n68yl14gp7fej3r49nzryu2rbutf84ce312nsncb5fqvlevsrebu5hqtnxo4kgsb28fhd8yvsig11oeeo9uirgo8oqr2b20proh2w35ziek8m8kzstmb5mhq67o1nob0rr5l411p6xsz84e62x4vylw0zsbkt0slr53tdxuuam6bwfmr0jti2xo09nlhptbm160lxhwcrohqee74t15nc3xh90xqt7m4o9nb3lsldkc47nww73qfdkvfsud2iark8yhq2ji8aze9qj2izwbj5mt1yj4ga7lnyx2j47aopiyge24dqp75mbw52r8pc9f9sjh2a7fjgtwy2cj82p9672rhvnr2ncxpyaz4afsbqhj53a064ckluzmfm9o7cc7nz7nfdjxcd653sme4wk9hx7gn2999isga056bg8f9483ea9xtjn7im2ijrp7thvbqeukpuy4ycsoqf1mghjruzczelkog62pyt9o2wxvee8qcy5m5fyu44vm01hdqpsp02zt8cbwugpb9x2z7o6uhtq3hqqjwmp6i3ozkyiuj6jnhdl4mtyj6roh6e0xdo1txh93zwk4nzf509m4kw0why8b8mm53ydw37r4bl6len3sru6nwbf7s7fg3nf7ucbu35hgpm4h47eyylalf6u16viu9l48diaw7v6hrwll8pu211fq9hhrsehjxxy4zug0205ptii9h2h6kl31vi38k4dremh9dbbcg2ygfbdqmjt1k44rsttwqv4ojcacbehlqc0vhq67eg4pavd5kwajjvqmt1d9qzzr3smiu5n5ij5tflc97kiit2h6uot0wyzwud47m1kfq6qmtb7rpgyfwdfpmob1pj9nz1zm6qmqfzzrllcg6dslgyki1zdv1awepr9ztf4ldz30135ewuondo64t84jhg07qzzykemm9du6hh3bd2hbsggru6t2utgukr0wtr3g4x3knwksltapki563emtyscy3dcz860o5cebds21t6m17bckpdji5d3s3uosrjebygq3if3v78tvil
1yik2blhq4yzwnz0o9p9a0r7vktxal7kpzmxjx7uxhrgvajqzaxta4heb0glzyu0ft90jndf5qsg5clj1vqut7mv6pwpl2ta1ibrh8p9m6chvqqyi7nd32nih1h71gju6bz9wtzyfnufnkggaf2mqx9n97jxwzd801svkyl9knjbcdcrj7jyl5y7fn9s231noio6o1mqxgxxow01n9ng4k4hgi3jh9ra4odgr794nwsdjqyaejdii54wsgcxgdai00qorjqojuwuy3uocnhf9x248fy0t8u8ag3k9r757yxonqfjus98asm5sqc89tcchziq22zyiye7fl04ujtu48h8x71wtnicdg1t95wwa3b7egzycazmt2skgh2yain3jw2mfvsbsezw4j228m4l08g3tbamsky75tq8vqwfvpd4zzmmd1x30vjiszjjcgbbsjo4j2o0blpqk7no72o54yw4lm6pm8206pnyw9xjfhr5hg2d2opkt61em56syzyo6ac5cfnk7ak6trhiax3m82kp1xopnqzwkm147ijqr22b84gnphu0iit38ct9fwakjox0k29njyefn0psbu91p0icqjxur0fe991gt4fi55jphth68u3zoitauc1iqpmczgmkcq9isfp0i7dmikgod0prmgy2k906ouj7jtklj59bxhm94e4fv5vrr4vnh7ed6lsz6wj80tys1j42l5p7byuycsa61xwy323tmyjg9rihg8kn93qrfisz5tkuvaycqy6dtwgxps396nw8gglyc1qm99l10dgtyximtyc4x5jn6cbmrko6yq7wtgrsgp6y4rzhqcj2uxutjy7k5onu7706o9aa6hmczsn4n4iemzf0nldg9s9l9p67okx1j866p479wx14ous4w7j3rxofb5hny5f54qafo41a4u7e20ewvynej9y5t4kk6i199o005wdzbdf0jum7nc5kw4gf85s3wdmaf73clh92xz0810mkwzxf84tz773c8mwo8vkmz8gkfhnc674cpljpto784axsp32s4hx30mp0gmu4os0qvge9j80mwxmd8nez4o1nxi84qtv5cj9u9mzgmpivkxw5l651t6d9j0a13jjcensq1hfc7v643zhuffbzgv0l0taqnddd7j07jpxu4z2rj2w41lm5re2za38qzw3suy39beqf78s7mb8sm8kvapumng2iytlmekdmdy8qdhcv1b4gnzmyuoa1ejntueuk8m2vvbm4irvz80eg6fo39bz9w87h4ajn881ua10j6e43r6ni7i7bmysj8ujykrv72kxzcp69kfuljdn2xl0eyj1lrmi9pjkish4kg4kmign2crdj9k9i748gd4vs0s4iyksku61floxkqmton5ip5sbbsctxwd2s8gt3f4eg9p18dg3eaiol4vcmhi5kkcjac3g0zim2jeq08 == \c\n\s\n\a\o\b\9\w\q\i\r\7\h\4\u\l\8\s\l\u\g\p\t\6\p\r\4\z\4\2\x\u\6\z\h\u\w\d\l\v\d\5\a\l\d\c\e\m\b\u\h\8\q\2\u\f\m\i\0\7\2\s\r\d\4\d\7\c\x\x\n\f\j\n\u\r\i\6\g\r\f\p\l\a\5\v\z\q\s\t\e\o\4\g\b\v\k\2\n\e\d\c\h\i\2\y\c\n\2\2\x\r\h\3\v\m\d\6\0\u\9\5\2\a\a\0\k\y\q\r\y\u\i\4\q\6\h\s\n\b\m\j\s\2\l\z\k\8\b\a\a\d\j\y\9\g\t\k\a\l\p\x\d\p\8\e\s\l\c\2\f\8\3\4\e\7\g\9\w\f\4\e\i\p\a\4\x\g\r\z\y\t\2\q\8\s\y\z\r\p\9\f\d\x\b\v\l\d\5\k\9\4\0\e\6\7\3\d\n\y\0\e\p\n\z\a\p\i\1\0\j\j\y\x\f\b\2\x\c\5\i\v\8\k\x\8\2\s\v\a\0\1\m\h\2\b\g\f\3\f\k\o\j\r\w\t\a\0\s\p\d\g\k\w\k\7\r\g\2\7\k\k\a\h\o\t\0\h\v\i\l\2\w\u\a\e\3\c\8\s\1\y\z\8\q\0\2\o\8\4\r\o\w\b\1\d\q\b\1\c\k\b\k\0\f\c\o\l\r\4\r\z\y\l\0\h\f\p\g\h\q\i\k\o\3\j\l\7\c\h\j\x\0\p\t\t\y\n\3\2\v\8\6\j\1\w\t\q\j\d\c\x\c\p\d\2\3\c\6\1\l\x\g\n\8\z\f\e\m\i\p\y\o\t\u\y\o\z\x\v\k\8\z\d\h\i\g\j\f\t\b\d\3\1\5\6\k\y\p\1\9\n\0\9\r\8\e\z\z\1\7\9\2\r\o\5\k\l\f\z\t\d\x\2\a\n\u\0\e\5\2\w\t\0\l\9\x\a\a\a\9\k\v\o\0\g\g\w\l\2\a\c\h\v\u\n\m\l\a\s\z\p\2\3\b\o\3\w\8\4\z\o\a\s\j\d\l\b\9\m\n\k\f\p\n\g\s\t\5\5\6\x\9\s\o\w\v\h\m\7\g\c\j\j\j\m\8\u\p\y\8\2\3\6\c\2\y\y\6\h\3\n\6\g\o\7\o\g\2\l\0\m\d\m\j\y\m\i\m\q\t\2\b\m\2\2\4\a\3\x\s\c\n\e\g\e\5\w\e\o\8\h\x\3\7\2\d\n\v\8\u\w\h\p\w\i\g\e\g\k\9\b\s\s\9\z\1\l\o\k\8\7\n\6\d\c\c\4\k\c\9\1\j\7\p\x\c\9\d\s\y\r\1\5\r\y\l\k\u\0\k\2\q\2\u\m\0\2\3\j\x\s\6\n\w\e\3\x\y\b\7\k\5\p\7\j\1\l\s\f\l\r\l\i\v\u\q\2\7\y\i\u\c\j\p\9\f\9\w\a\h\k\s\e\3\9\t\5\c\2\1\j\q\k\6\d\g\t\9\i\u\x\u\h\p\x\2\j\j\r\r\g\h\d\f\q\q\0\h\6\p\w\s\6\h\b\e\r\4\3\f\4\p\v\4\6\0\k\2\m\h\w\h\3\0\a\r\7\l\n\0\e\e\v\0\h\8\a\0\i\2\v\l\u\9\e\b\7\6\z\v\x\v\m\x\e\2\b\9\t\j\5\a\x\g\y\o\1\m\2\j\u\1\q\m\j\6\7\n\v\c\1\a\i\e\7\k\1\x\o\v\1\b\e\a\a\2\5\n\7\7\1\q\4\c\b\i\f\f\u\s\o\3\i\s\p\q\w\t\m\d\q\1\7\i\i\u\f\y\i\i\g\c\n\n\c\k\z\9\v\d\0\3\q\r\u\0\h\i\f\l\e\i\v\b\c\w\e\q\s\4\9\g\7\3\o\u\f\y\e\8\3\y\7\h\4\5\7\4\6\z\v\w\l\d\l\b\i\8\2\k\z\y\0\j\m\z\p\8\z\2\s\1\f\i\w\m\z\j\4\j\f\s\i\i\n\x\t\7\3\q\6\t\l\x\w\z\5\7\g\7\o\h\l\g\b\o\o\a\g\6\j\m\7\m\u\0\t\d\w\l\4\8\t\1\6\t\5\5\y\x\j\f\h\w\g\e\z\3\m\a\w\a\7\y\8\6\t\e\t\b\b\8\a\g\m\o\u\y\j\u\4\5\y\s\t\r\a\y\b\e\m\s\k\c\d\w\h\6\s\g\a\7\j
\5\t\b\d\j\2\5\m\p\4\a\7\t\r\y\f\4\u\z\a\o\i\q\0\x\z\p\x\p\9\9\t\7\m\x\s\3\n\g\l\n\2\t\z\2\1\x\x\v\z\j\0\z\s\z\5\g\p\2\z\r\s\3\2\l\8\z\i\t\w\v\w\i\8\3\1\n\i\x\n\8\8\d\g\t\e\h\0\l\b\6\t\w\w\3\l\i\9\a\k\d\w\h\2\x\c\j\5\h\d\4\h\r\3\q\y\9\z\a\w\k\d\y\y\t\x\z\f\j\2\1\y\x\u\9\v\9\9\e\8\i\e\3\2\6\n\1\9\v\h\o\v\j\8\3\a\i\o\0\j\1\b\5\z\7\y\o\x\k\4\9\b\p\r\b\j\w\k\n\p\0\l\u\e\8\b\m\r\q\0\u\6\l\7\f\z\i\m\j\4\j\5\r\m\k\m\e\d\1\y\5\d\y\z\g\n\v\y\c\p\s\w\r\x\w\l\n\p\3\n\b\k\8\n\1\8\k\c\3\q\7\a\s\n\i\f\j\j\s\9\r\d\b\5\x\9\n\c\c\4\o\8\x\m\f\y\k\l\p\y\5\g\g\p\e\g\9\t\b\d\0\i\s\2\g\l\j\1\4\0\y\5\y\a\2\a\7\i\y\x\0\u\u\i\6\3\a\a\b\a\b\7\9\o\4\o\g\0\2\n\5\c\c\x\z\i\o\2\d\5\l\s\x\1\5\m\v\l\k\x\6\y\k\m\e\g\b\a\e\r\3\5\4\u\r\a\w\3\1\6\c\o\r\n\m\f\l\e\k\x\f\u\e\z\v\y\k\j\p\i\y\s\u\b\m\n\x\f\h\m\q\l\n\8\6\8\p\g\a\y\w\w\4\8\v\8\8\1\f\x\0\k\1\i\q\b\1\9\7\t\e\4\h\l\w\i\x\g\l\a\a\f\m\k\y\6\t\f\o\0\4\o\b\a\c\3\v\3\7\i\d\f\k\h\u\7\g\8\b\7\j\f\v\1\y\d\b\2\n\m\9\1\i\u\a\j\p\8\8\0\7\d\b\w\i\0\6\a\i\k\r\j\5\t\y\x\q\a\y\b\m\r\c\8\0\1\s\8\1\3\v\o\y\v\w\b\x\3\9\u\y\m\p\c\b\y\u\l\i\1\x\9\m\l\z\r\5\7\y\9\5\t\a\k\8\n\6\8\y\l\1\4\g\p\7\f\e\j\3\r\4\9\n\z\r\y\u\2\r\b\u\t\f\8\4\c\e\3\1\2\n\s\n\c\b\5\f\q\v\l\e\v\s\r\e\b\u\5\h\q\t\n\x\o\4\k\g\s\b\2\8\f\h\d\8\y\v\s\i\g\1\1\o\e\e\o\9\u\i\r\g\o\8\o\q\r\2\b\2\0\p\r\o\h\2\w\3\5\z\i\e\k\8\m\8\k\z\s\t\m\b\5\m\h\q\6\7\o\1\n\o\b\0\r\r\5\l\4\1\1\p\6\x\s\z\8\4\e\6\2\x\4\v\y\l\w\0\z\s\b\k\t\0\s\l\r\5\3\t\d\x\u\u\a\m\6\b\w\f\m\r\0\j\t\i\2\x\o\0\9\n\l\h\p\t\b\m\1\6\0\l\x\h\w\c\r\o\h\q\e\e\7\4\t\1\5\n\c\3\x\h\9\0\x\q\t\7\m\4\o\9\n\b\3\l\s\l\d\k\c\4\7\n\w\w\7\3\q\f\d\k\v\f\s\u\d\2\i\a\r\k\8\y\h\q\2\j\i\8\a\z\e\9\q\j\2\i\z\w\b\j\5\m\t\1\y\j\4\g\a\7\l\n\y\x\2\j\4\7\a\o\p\i\y\g\e\2\4\d\q\p\7\5\m\b\w\5\2\r\8\p\c\9\f\9\s\j\h\2\a\7\f\j\g\t\w\y\2\c\j\8\2\p\9\6\7\2\r\h\v\n\r\2\n\c\x\p\y\a\z\4\a\f\s\b\q\h\j\5\3\a\0\6\4\c\k\l\u\z\m\f\m\9\o\7\c\c\7\n\z\7\n\f\d\j\x\c\d\6\5\3\s\m\e\4\w\k\9\h\x\7\g\n\2\9\9\9\i\s\g\a\0\5\6\b\g\8\f\9\4\8\3\e\a\9\x\t\j\n\7\i\m\2\i\j\r\p\7\t\h\v\b\q\e\u\k\p\u\y\4\y\c\s\o\q\f\1\m\g\h\j\r\u\z\c\z\e\l\k\o\g\6\2\p\y\t\9\o\2\w\x\v\e\e\8\q\c\y\5\m\5\f\y\u\4\4\v\m\0\1\h\d\q\p\s\p\0\2\z\t\8\c\b\w\u\g\p\b\9\x\2\z\7\o\6\u\h\t\q\3\h\q\q\j\w\m\p\6\i\3\o\z\k\y\i\u\j\6\j\n\h\d\l\4\m\t\y\j\6\r\o\h\6\e\0\x\d\o\1\t\x\h\9\3\z\w\k\4\n\z\f\5\0\9\m\4\k\w\0\w\h\y\8\b\8\m\m\5\3\y\d\w\3\7\r\4\b\l\6\l\e\n\3\s\r\u\6\n\w\b\f\7\s\7\f\g\3\n\f\7\u\c\b\u\3\5\h\g\p\m\4\h\4\7\e\y\y\l\a\l\f\6\u\1\6\v\i\u\9\l\4\8\d\i\a\w\7\v\6\h\r\w\l\l\8\p\u\2\1\1\f\q\9\h\h\r\s\e\h\j\x\x\y\4\z\u\g\0\2\0\5\p\t\i\i\9\h\2\h\6\k\l\3\1\v\i\3\8\k\4\d\r\e\m\h\9\d\b\b\c\g\2\y\g\f\b\d\q\m\j\t\1\k\4\4\r\s\t\t\w\q\v\4\o\j\c\a\c\b\e\h\l\q\c\0\v\h\q\6\7\e\g\4\p\a\v\d\5\k\w\a\j\j\v\q\m\t\1\d\9\q\z\z\r\3\s\m\i\u\5\n\5\i\j\5\t\f\l\c\9\7\k\i\i\t\2\h\6\u\o\t\0\w\y\z\w\u\d\4\7\m\1\k\f\q\6\q\m\t\b\7\r\p\g\y\f\w\d\f\p\m\o\b\1\p\j\9\n\z\1\z\m\6\q\m\q\f\z\z\r\l\l\c\g\6\d\s\l\g\y\k\i\1\z\d\v\1\a\w\e\p\r\9\z\t\f\4\l\d\z\3\0\1\3\5\e\w\u\o\n\d\o\6\4\t\8\4\j\h\g\0\7\q\z\z\y\k\e\m\m\9\d\u\6\h\h\3\b\d\2\h\b\s\g\g\r\u\6\t\2\u\t\g\u\k\r\0\w\t\r\3\g\4\x\3\k\n\w\k\s\l\t\a\p\k\i\5\6\3\e\m\t\y\s\c\y\3\d\c\z\8\6\0\o\5\c\e\b\d\s\2\1\t\6\m\1\7\b\c\k\p\d\j\i\5\d\3\s\3\u\o\s\r\j\e\b\y\g\q\3\i\f\3\v\7\8\t\v\i\l\1\y\i\k\2\b\l\h\q\4\y\z\w\n\z\0\o\9\p\9\a\0\r\7\v\k\t\x\a\l\7\k\p\z\m\x\j\x\7\u\x\h\r\g\v\a\j\q\z\a\x\t\a\4\h\e\b\0\g\l\z\y\u\0\f\t\9\0\j\n\d\f\5\q\s\g\5\c\l\j\1\v\q\u\t\7\m\v\6\p\w\p\l\2\t\a\1\i\b\r\h\8\p\9\m\6\c\h\v\q\q\y\i\7\n\d\3\2\n\i\h\1\h\7\1\g\j\u\6\b\z\9\w\t\z\y\f\n\u\f\n\k\g\g\a\f\2\m\q\x\9\n\9\7\j\x\w\z\d\8\0\1\s\v\k\y\l\9\k\n\j\b\c\d\c\r\j\7\j\y\l\5\y\7\f\n\9\
s\2\3\1\n\o\i\o\6\o\1\m\q\x\g\x\x\o\w\0\1\n\9\n\g\4\k\4\h\g\i\3\j\h\9\r\a\4\o\d\g\r\7\9\4\n\w\s\d\j\q\y\a\e\j\d\i\i\5\4\w\s\g\c\x\g\d\a\i\0\0\q\o\r\j\q\o\j\u\w\u\y\3\u\o\c\n\h\f\9\x\2\4\8\f\y\0\t\8\u\8\a\g\3\k\9\r\7\5\7\y\x\o\n\q\f\j\u\s\9\8\a\s\m\5\s\q\c\8\9\t\c\c\h\z\i\q\2\2\z\y\i\y\e\7\f\l\0\4\u\j\t\u\4\8\h\8\x\7\1\w\t\n\i\c\d\g\1\t\9\5\w\w\a\3\b\7\e\g\z\y\c\a\z\m\t\2\s\k\g\h\2\y\a\i\n\3\j\w\2\m\f\v\s\b\s\e\z\w\4\j\2\2\8\m\4\l\0\8\g\3\t\b\a\m\s\k\y\7\5\t\q\8\v\q\w\f\v\p\d\4\z\z\m\m\d\1\x\3\0\v\j\i\s\z\j\j\c\g\b\b\s\j\o\4\j\2\o\0\b\l\p\q\k\7\n\o\7\2\o\5\4\y\w\4\l\m\6\p\m\8\2\0\6\p\n\y\w\9\x\j\f\h\r\5\h\g\2\d\2\o\p\k\t\6\1\e\m\5\6\s\y\z\y\o\6\a\c\5\c\f\n\k\7\a\k\6\t\r\h\i\a\x\3\m\8\2\k\p\1\x\o\p\n\q\z\w\k\m\1\4\7\i\j\q\r\2\2\b\8\4\g\n\p\h\u\0\i\i\t\3\8\c\t\9\f\w\a\k\j\o\x\0\k\2\9\n\j\y\e\f\n\0\p\s\b\u\9\1\p\0\i\c\q\j\x\u\r\0\f\e\9\9\1\g\t\4\f\i\5\5\j\p\h\t\h\6\8\u\3\z\o\i\t\a\u\c\1\i\q\p\m\c\z\g\m\k\c\q\9\i\s\f\p\0\i\7\d\m\i\k\g\o\d\0\p\r\m\g\y\2\k\9\0\6\o\u\j\7\j\t\k\l\j\5\9\b\x\h\m\9\4\e\4\f\v\5\v\r\r\4\v\n\h\7\e\d\6\l\s\z\6\w\j\8\0\t\y\s\1\j\4\2\l\5\p\7\b\y\u\y\c\s\a\6\1\x\w\y\3\2\3\t\m\y\j\g\9\r\i\h\g\8\k\n\9\3\q\r\f\i\s\z\5\t\k\u\v\a\y\c\q\y\6\d\t\w\g\x\p\s\3\9\6\n\w\8\g\g\l\y\c\1\q\m\9\9\l\1\0\d\g\t\y\x\i\m\t\y\c\4\x\5\j\n\6\c\b\m\r\k\o\6\y\q\7\w\t\g\r\s\g\p\6\y\4\r\z\h\q\c\j\2\u\x\u\t\j\y\7\k\5\o\n\u\7\7\0\6\o\9\a\a\6\h\m\c\z\s\n\4\n\4\i\e\m\z\f\0\n\l\d\g\9\s\9\l\9\p\6\7\o\k\x\1\j\8\6\6\p\4\7\9\w\x\1\4\o\u\s\4\w\7\j\3\r\x\o\f\b\5\h\n\y\5\f\5\4\q\a\f\o\4\1\a\4\u\7\e\2\0\e\w\v\y\n\e\j\9\y\5\t\4\k\k\6\i\1\9\9\o\0\0\5\w\d\z\b\d\f\0\j\u\m\7\n\c\5\k\w\4\g\f\8\5\s\3\w\d\m\a\f\7\3\c\l\h\9\2\x\z\0\8\1\0\m\k\w\z\x\f\8\4\t\z\7\7\3\c\8\m\w\o\8\v\k\m\z\8\g\k\f\h\n\c\6\7\4\c\p\l\j\p\t\o\7\8\4\a\x\s\p\3\2\s\4\h\x\3\0\m\p\0\g\m\u\4\o\s\0\q\v\g\e\9\j\8\0\m\w\x\m\d\8\n\e\z\4\o\1\n\x\i\8\4\q\t\v\5\c\j\9\u\9\m\z\g\m\p\i\v\k\x\w\5\l\6\5\1\t\6\d\9\j\0\a\1\3\j\j\c\e\n\s\q\1\h\f\c\7\v\6\4\3\z\h\u\f\f\b\z\g\v\0\l\0\t\a\q\n\d\d\d\7\j\0\7\j\p\x\u\4\z\2\r\j\2\w\4\1\l\m\5\r\e\2\z\a\3\8\q\z\w\3\s\u\y\3\9\b\e\q\f\7\8\s\7\m\b\8\s\m\8\k\v\a\p\u\m\n\g\2\i\y\t\l\m\e\k\d\m\d\y\8\q\d\h\c\v\1\b\4\g\n\z\m\y\u\o\a\1\e\j\n\t\u\e\u\k\8\m\2\v\v\b\m\4\i\r\v\z\8\0\e\g\6\f\o\3\9\b\z\9\w\8\7\h\4\a\j\n\8\8\1\u\a\1\0\j\6\e\4\3\r\6\n\i\7\i\7\b\m\y\s\j\8\u\j\y\k\r\v\7\2\k\x\z\c\p\6\9\k\f\u\l\j\d\n\2\x\l\0\e\y\j\1\l\r\m\i\9\p\j\k\i\s\h\4\k\g\4\k\m\i\g\n\2\c\r\d\j\9\k\9\i\7\4\8\g\d\4\v\s\0\s\4\i\y\k\s\k\u\6\1\f\l\o\x\k\q\m\t\o\n\5\i\p\5\s\b\b\s\c\t\x\w\d\2\s\8\g\t\3\f\4\e\g\9\p\1\8\d\g\3\e\a\i\o\l\4\v\c\m\h\i\5\k\k\c\j\a\c\3\g\0\z\i\m\2\j\e\q\0\8 ]] 00:07:33.142 ************************************ 00:07:33.142 END TEST dd_rw_offset 00:07:33.142 ************************************ 00:07:33.142 00:07:33.142 real 0m3.528s 00:07:33.142 user 0m3.002s 00:07:33.142 sys 0m1.630s 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:33.142 01:20:29 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:33.142 01:20:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.401 { 00:07:33.401 "subsystems": [ 00:07:33.401 { 00:07:33.401 "subsystem": "bdev", 00:07:33.401 "config": [ 00:07:33.401 { 00:07:33.401 "params": { 00:07:33.401 "trtype": "pcie", 00:07:33.401 "traddr": "0000:00:10.0", 00:07:33.401 "name": "Nvme0" 00:07:33.401 }, 00:07:33.401 "method": "bdev_nvme_attach_controller" 00:07:33.401 }, 00:07:33.401 { 00:07:33.401 "method": "bdev_wait_for_examine" 00:07:33.401 } 00:07:33.401 ] 00:07:33.401 } 00:07:33.401 ] 00:07:33.401 } 00:07:33.401 [2024-09-28 01:20:29.141312] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:33.401 [2024-09-28 01:20:29.141984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61587 ] 00:07:33.401 [2024-09-28 01:20:29.296172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.661 [2024-09-28 01:20:29.468458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.920 [2024-09-28 01:20:29.621279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.857  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:34.857 00:07:34.857 01:20:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.857 00:07:34.857 real 0m40.175s 00:07:34.857 user 0m33.744s 00:07:34.857 sys 0m16.727s 00:07:34.857 01:20:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.857 ************************************ 00:07:34.857 END TEST spdk_dd_basic_rw 00:07:34.857 ************************************ 00:07:34.857 01:20:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.117 01:20:30 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:35.117 01:20:30 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.117 01:20:30 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.117 01:20:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:35.117 ************************************ 00:07:35.117 START TEST spdk_dd_posix 00:07:35.117 ************************************ 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:35.117 * Looking for test storage... 
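The basic_rw group above finishes by calling clear_nvme, which overwrites the start of the target bdev with zeroes (bs=1048576, count=1, --if=/dev/zero) before the posix group begins. A simplified stand-in for that helper, assuming the flags and bdev config visible in the trace; the real dd/common.sh implementation also tracks size and reference data not shown here:

  clear_nvme() {
    local bdev=$1
    local conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
    # overwrite the first 1 MiB of the bdev with zeroes, as in the trace above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/zero --bs=1048576 --ob="$bdev" --count=1 --json <(echo "$conf")
  }
  clear_nvme Nvme0n1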
00:07:35.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.117 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.118 --rc genhtml_branch_coverage=1 00:07:35.118 --rc genhtml_function_coverage=1 00:07:35.118 --rc genhtml_legend=1 00:07:35.118 --rc geninfo_all_blocks=1 00:07:35.118 --rc geninfo_unexecuted_blocks=1 00:07:35.118 00:07:35.118 ' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.118 --rc genhtml_branch_coverage=1 00:07:35.118 --rc genhtml_function_coverage=1 00:07:35.118 --rc genhtml_legend=1 00:07:35.118 --rc geninfo_all_blocks=1 00:07:35.118 --rc geninfo_unexecuted_blocks=1 00:07:35.118 00:07:35.118 ' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.118 --rc genhtml_branch_coverage=1 00:07:35.118 --rc genhtml_function_coverage=1 00:07:35.118 --rc genhtml_legend=1 00:07:35.118 --rc geninfo_all_blocks=1 00:07:35.118 --rc geninfo_unexecuted_blocks=1 00:07:35.118 00:07:35.118 ' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.118 --rc genhtml_branch_coverage=1 00:07:35.118 --rc genhtml_function_coverage=1 00:07:35.118 --rc genhtml_legend=1 00:07:35.118 --rc geninfo_all_blocks=1 00:07:35.118 --rc geninfo_unexecuted_blocks=1 00:07:35.118 00:07:35.118 ' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:35.118 * First test run, liburing in use 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:35.118 01:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 ************************************ 00:07:35.118 START TEST dd_flag_append 00:07:35.118 ************************************ 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=6knh2hbl7k9zr5584tuxbgs8fia3srqn 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=0nfufb3c2v11xjw5n427ubd9h78jzv7s 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 6knh2hbl7k9zr5584tuxbgs8fia3srqn 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 0nfufb3c2v11xjw5n427ubd9h78jzv7s 00:07:35.118 01:20:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:35.377 [2024-09-28 01:20:31.122479] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
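The dd_flag_append test traced here writes one 32-byte random string into each dump file and then copies dump0 onto dump1 with --oflag=append; the comparison that follows in the log passes only if dump1 ends up holding its original string with dump0's string appended. A condensed sketch of that flow, using the dd.dump0/dd.dump1 file names and spdk_dd flags from the trace (the tr/head pipeline is a stand-in for the test's gen_bytes helper):

  dump0=$(tr -dc a-z0-9 < /dev/urandom | head -c 32)
  dump1=$(tr -dc a-z0-9 < /dev/urandom | head -c 32)
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  # append dump0's contents onto dump1 instead of overwriting it
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ $(cat dd.dump1) == "${dump1}${dump0}" ]] && echo "append flag OK"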
00:07:35.377 [2024-09-28 01:20:31.122676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61671 ] 00:07:35.377 [2024-09-28 01:20:31.301425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.636 [2024-09-28 01:20:31.529097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.895 [2024-09-28 01:20:31.692216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.832  Copying: 32/32 [B] (average 31 kBps) 00:07:36.832 00:07:37.091 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 0nfufb3c2v11xjw5n427ubd9h78jzv7s6knh2hbl7k9zr5584tuxbgs8fia3srqn == \0\n\f\u\f\b\3\c\2\v\1\1\x\j\w\5\n\4\2\7\u\b\d\9\h\7\8\j\z\v\7\s\6\k\n\h\2\h\b\l\7\k\9\z\r\5\5\8\4\t\u\x\b\g\s\8\f\i\a\3\s\r\q\n ]] 00:07:37.091 00:07:37.091 real 0m1.781s 00:07:37.092 user 0m1.452s 00:07:37.092 sys 0m0.847s 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:37.092 ************************************ 00:07:37.092 END TEST dd_flag_append 00:07:37.092 ************************************ 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:37.092 ************************************ 00:07:37.092 START TEST dd_flag_directory 00:07:37.092 ************************************ 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.092 01:20:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.092 [2024-09-28 01:20:32.951002] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:37.092 [2024-09-28 01:20:32.951222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61717 ] 00:07:37.351 [2024-09-28 01:20:33.119567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.351 [2024-09-28 01:20:33.283059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.635 [2024-09-28 01:20:33.449013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.635 [2024-09-28 01:20:33.532371] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:37.635 [2024-09-28 01:20:33.532477] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:37.635 [2024-09-28 01:20:33.532500] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.579 [2024-09-28 01:20:34.156752] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.838 01:20:34 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:38.838 01:20:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:38.838 [2024-09-28 01:20:34.659012] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:38.838 [2024-09-28 01:20:34.659779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61739 ] 00:07:39.098 [2024-09-28 01:20:34.823712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.098 [2024-09-28 01:20:34.974732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.356 [2024-09-28 01:20:35.129329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.356 [2024-09-28 01:20:35.219169] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:39.356 [2024-09-28 01:20:35.219300] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:39.356 [2024-09-28 01:20:35.219326] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.925 [2024-09-28 01:20:35.853915] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.491 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:40.491 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.491 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:40.491 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:40.491 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:40.491 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.491 00:07:40.491 real 0m3.378s 00:07:40.491 user 0m2.752s 00:07:40.491 sys 0m0.403s 00:07:40.491 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.491 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:40.491 ************************************ 00:07:40.491 END TEST dd_flag_directory 00:07:40.491 ************************************ 00:07:40.492 01:20:36 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:40.492 ************************************ 00:07:40.492 START TEST dd_flag_nofollow 00:07:40.492 ************************************ 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.492 01:20:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.492 [2024-09-28 01:20:36.365030] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
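The dd_flag_nofollow test above first points dd.dump0.link and dd.dump1.link at the real dump files, then expects spdk_dd to refuse the symlink when --iflag=nofollow is set; the NOT wrapper and the es= bookkeeping in the trace turn that expected "Too many levels of symbolic links" failure into a test pass. A simplified sketch of the negative check, assuming only the commands visible in the log:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  # the copy must fail because nofollow forbids opening through the symlink
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
    echo "nofollow was ignored" >&2
    exit 1
  fi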
00:07:40.492 [2024-09-28 01:20:36.365190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61779 ] 00:07:40.751 [2024-09-28 01:20:36.520410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.751 [2024-09-28 01:20:36.673234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.010 [2024-09-28 01:20:36.818649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.010 [2024-09-28 01:20:36.908021] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:41.010 [2024-09-28 01:20:36.908119] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:41.010 [2024-09-28 01:20:36.908142] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.577 [2024-09-28 01:20:37.498016] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.145 01:20:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.145 01:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.145 [2024-09-28 01:20:37.975860] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:42.145 [2024-09-28 01:20:37.976059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61801 ] 00:07:42.404 [2024-09-28 01:20:38.142046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.404 [2024-09-28 01:20:38.304169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.663 [2024-09-28 01:20:38.458893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.663 [2024-09-28 01:20:38.537908] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:42.663 [2024-09-28 01:20:38.538001] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:42.663 [2024-09-28 01:20:38.538024] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.230 [2024-09-28 01:20:39.106504] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:43.797 01:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.797 [2024-09-28 01:20:39.641081] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:43.797 [2024-09-28 01:20:39.641273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61826 ] 00:07:44.055 [2024-09-28 01:20:39.810082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.055 [2024-09-28 01:20:39.958682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.313 [2024-09-28 01:20:40.117249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.250  Copying: 512/512 [B] (average 500 kBps) 00:07:45.250 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ vkfmmhkocmvr96wik4nxuhwze59m0v80cd0x4hy3yid6yk47b10xgxvb49mtxqewpb35wvggzwxztpi2yubji2sv4tw0swf076vecam1q61ieqsl7rsww7q6mkl8vzxe04slkuhcaxekxkw1fi0f4zmw6gvvx2swyvvfktzh2zylx3uk28p9olcswx6omyggri79cw0cj07iwf17fmq9xlmgiqgav6f1l7b37mayyxw3a2oa0lo9xj5qdsnotewqt4rdqwiptdpr7punsr6c5qk7bvc8ho7cz84e0y4iov6y6m42olf3p0x9lpjvvwtw7ra62unlg2snliw7ivzrsytbza9rvcu5f3uo147hb2zpfdo1qdw66t2fm7dfddn5sg5hckf28mde27qv0zy3ui74wy2dwp51g6qe3i9nsclnqyd8v4sbebbfsific2sfpz4b6tpmvm35vxzvi9ktky6gp1h3k820wqpij8n457fuhjil7htf52p5ruxvj2lp == \v\k\f\m\m\h\k\o\c\m\v\r\9\6\w\i\k\4\n\x\u\h\w\z\e\5\9\m\0\v\8\0\c\d\0\x\4\h\y\3\y\i\d\6\y\k\4\7\b\1\0\x\g\x\v\b\4\9\m\t\x\q\e\w\p\b\3\5\w\v\g\g\z\w\x\z\t\p\i\2\y\u\b\j\i\2\s\v\4\t\w\0\s\w\f\0\7\6\v\e\c\a\m\1\q\6\1\i\e\q\s\l\7\r\s\w\w\7\q\6\m\k\l\8\v\z\x\e\0\4\s\l\k\u\h\c\a\x\e\k\x\k\w\1\f\i\0\f\4\z\m\w\6\g\v\v\x\2\s\w\y\v\v\f\k\t\z\h\2\z\y\l\x\3\u\k\2\8\p\9\o\l\c\s\w\x\6\o\m\y\g\g\r\i\7\9\c\w\0\c\j\0\7\i\w\f\1\7\f\m\q\9\x\l\m\g\i\q\g\a\v\6\f\1\l\7\b\3\7\m\a\y\y\x\w\3\a\2\o\a\0\l\o\9\x\j\5\q\d\s\n\o\t\e\w\q\t\4\r\d\q\w\i\p\t\d\p\r\7\p\u\n\s\r\6\c\5\q\k\7\b\v\c\8\h\o\7\c\z\8\4\e\0\y\4\i\o\v\6\y\6\m\4\2\o\l\f\3\p\0\x\9\l\p\j\v\v\w\t\w\7\r\a\6\2\u\n\l\g\2\s\n\l\i\w\7\i\v\z\r\s\y\t\b\z\a\9\r\v\c\u\5\f\3\u\o\1\4\7\h\b\2\z\p\f\d\o\1\q\d\w\6\6\t\2\f\m\7\d\f\d\d\n\5\s\g\5\h\c\k\f\2\8\m\d\e\2\7\q\v\0\z\y\3\u\i\7\4\w\y\2\d\w\p\5\1\g\6\q\e\3\i\9\n\s\c\l\n\q\y\d\8\v\4\s\b\e\b\b\f\s\i\f\i\c\2\s\f\p\z\4\b\6\t\p\m\v\m\3\5\v\x\z\v\i\9\k\t\k\y\6\g\p\1\h\3\k\8\2\0\w\q\p\i\j\8\n\4\5\7\f\u\h\j\i\l\7\h\t\f\5\2\p\5\r\u\x\v\j\2\l\p ]] 00:07:45.509 00:07:45.509 real 0m4.925s 00:07:45.509 user 0m3.983s 00:07:45.509 sys 0m1.239s 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:45.509 ************************************ 00:07:45.509 END TEST dd_flag_nofollow 00:07:45.509 ************************************ 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:45.509 ************************************ 00:07:45.509 START TEST dd_flag_noatime 00:07:45.509 ************************************ 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:45.509 01:20:41 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1727486440 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1727486441 00:07:45.509 01:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:46.446 01:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.705 [2024-09-28 01:20:42.378492] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:46.705 [2024-09-28 01:20:42.378698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61880 ] 00:07:46.705 [2024-09-28 01:20:42.543773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.963 [2024-09-28 01:20:42.724818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.963 [2024-09-28 01:20:42.889263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.184  Copying: 512/512 [B] (average 500 kBps) 00:07:48.184 00:07:48.184 01:20:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.184 01:20:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1727486440 )) 00:07:48.184 01:20:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.184 01:20:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1727486441 )) 00:07:48.184 01:20:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.184 [2024-09-28 01:20:44.099857] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
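The dd_flag_noatime test records each file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and then re-reads the timestamps; the (( atime_if == ... )) checks in the trace pass only if the source file's atime did not move. A compact sketch of that verification, using the same stat invocation and spdk_dd flag shown above:

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1                      # make any atime update observable at 1s resolution
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=dd.dump0 --iflag=noatime --of=dd.dump1
  atime_after=$(stat --printf=%X dd.dump0)
  (( atime_before == atime_after )) && echo "noatime honored"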
00:07:48.184 [2024-09-28 01:20:44.100028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61906 ] 00:07:48.456 [2024-09-28 01:20:44.270392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.715 [2024-09-28 01:20:44.435556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.715 [2024-09-28 01:20:44.591425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.912  Copying: 512/512 [B] (average 500 kBps) 00:07:49.912 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1727486444 )) 00:07:49.912 00:07:49.912 real 0m4.431s 00:07:49.912 user 0m2.788s 00:07:49.912 sys 0m1.638s 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:49.912 ************************************ 00:07:49.912 END TEST dd_flag_noatime 00:07:49.912 ************************************ 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:49.912 ************************************ 00:07:49.912 START TEST dd_flags_misc 00:07:49.912 ************************************ 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:49.912 01:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:50.170 [2024-09-28 01:20:45.854602] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
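
The dd_flag_noatime sequence above boils down to three steps: snapshot the source file's access time with stat --printf=%X, copy it with --iflag=noatime and require the atime to be unchanged, then copy it again without the flag and require the atime to have advanced. A minimal standalone sketch of that flow (DD_ROOT is an illustrative shorthand for the paths in the log, /dev/urandom stands in for the suite's gen_bytes helper, and the final check assumes the filesystem actually updates atime on read, i.e. it is not mounted noatime):

    DD_ROOT=/home/vagrant/spdk_repo/spdk                # locations taken from the log
    SPDK_DD=$DD_ROOT/build/bin/spdk_dd
    src=$DD_ROOT/test/dd/dd.dump0
    dst=$DD_ROOT/test/dd/dd.dump1

    head -c 512 /dev/urandom > "$src"                   # stand-in for gen_bytes 512
    atime_before=$(stat --printf=%X "$src")
    sleep 1                                             # let the clock tick so an atime change is observable

    "$SPDK_DD" --if="$src" --iflag=noatime --of="$dst"  # read must leave the source atime untouched
    (( $(stat --printf=%X "$src") == atime_before ))

    "$SPDK_DD" --if="$src" --of="$dst"                  # plain read is expected to bump the atime
    (( $(stat --printf=%X "$src") > atime_before ))
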
00:07:50.170 [2024-09-28 01:20:45.854797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:07:50.170 [2024-09-28 01:20:46.026021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.429 [2024-09-28 01:20:46.178087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.429 [2024-09-28 01:20:46.343931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.626  Copying: 512/512 [B] (average 500 kBps) 00:07:51.626 00:07:51.626 01:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b4f9mckg76pi4pkuu4een152atlbpw86iucy76jkdjmnaecmjcfga435uyu588dbt8bqkhb0fc6whumfy0uge1hn8p0d2hc1s01dy4of0m2mg326krpqqc2jthgrliv8ymvuh5s5wh4ugbc1za3ro67lbnn02rwa6dzzjbczdmb4yvk4wywf6vuupxopgfel9o8thkiuuawsbrv03z4ftzhwe9d3g5q9zt1p2k1jkb06b5al3j8cqhyg7ot3kfwn0dlsu5ae5v8m14q9dyic9d8v4clqjaddfltw0wldw0518bngwtt2rb4o7o2qasucle5nxxbupfd8wjj0wztjxjqgv8rlisl171yyo9gresgieaoducbahv59uwhqirnam711jrx6tyz53ldf25bwfwth3u09ojzyvliyxunn1rrkovge5lqcz4pc9vcd9j2k1h2ny46ud3l3igovdp6a9gmf9yeza5mfl87gbje8uvinosy7ukbfn2auirvi3fkc == \b\4\f\9\m\c\k\g\7\6\p\i\4\p\k\u\u\4\e\e\n\1\5\2\a\t\l\b\p\w\8\6\i\u\c\y\7\6\j\k\d\j\m\n\a\e\c\m\j\c\f\g\a\4\3\5\u\y\u\5\8\8\d\b\t\8\b\q\k\h\b\0\f\c\6\w\h\u\m\f\y\0\u\g\e\1\h\n\8\p\0\d\2\h\c\1\s\0\1\d\y\4\o\f\0\m\2\m\g\3\2\6\k\r\p\q\q\c\2\j\t\h\g\r\l\i\v\8\y\m\v\u\h\5\s\5\w\h\4\u\g\b\c\1\z\a\3\r\o\6\7\l\b\n\n\0\2\r\w\a\6\d\z\z\j\b\c\z\d\m\b\4\y\v\k\4\w\y\w\f\6\v\u\u\p\x\o\p\g\f\e\l\9\o\8\t\h\k\i\u\u\a\w\s\b\r\v\0\3\z\4\f\t\z\h\w\e\9\d\3\g\5\q\9\z\t\1\p\2\k\1\j\k\b\0\6\b\5\a\l\3\j\8\c\q\h\y\g\7\o\t\3\k\f\w\n\0\d\l\s\u\5\a\e\5\v\8\m\1\4\q\9\d\y\i\c\9\d\8\v\4\c\l\q\j\a\d\d\f\l\t\w\0\w\l\d\w\0\5\1\8\b\n\g\w\t\t\2\r\b\4\o\7\o\2\q\a\s\u\c\l\e\5\n\x\x\b\u\p\f\d\8\w\j\j\0\w\z\t\j\x\j\q\g\v\8\r\l\i\s\l\1\7\1\y\y\o\9\g\r\e\s\g\i\e\a\o\d\u\c\b\a\h\v\5\9\u\w\h\q\i\r\n\a\m\7\1\1\j\r\x\6\t\y\z\5\3\l\d\f\2\5\b\w\f\w\t\h\3\u\0\9\o\j\z\y\v\l\i\y\x\u\n\n\1\r\r\k\o\v\g\e\5\l\q\c\z\4\p\c\9\v\c\d\9\j\2\k\1\h\2\n\y\4\6\u\d\3\l\3\i\g\o\v\d\p\6\a\9\g\m\f\9\y\e\z\a\5\m\f\l\8\7\g\b\j\e\8\u\v\i\n\o\s\y\7\u\k\b\f\n\2\a\u\i\r\v\i\3\f\k\c ]] 00:07:51.626 01:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.626 01:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:51.626 [2024-09-28 01:20:47.544534] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:51.626 [2024-09-28 01:20:47.544727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61968 ] 00:07:51.886 [2024-09-28 01:20:47.707014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.145 [2024-09-28 01:20:47.867764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.145 [2024-09-28 01:20:48.013858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.342  Copying: 512/512 [B] (average 500 kBps) 00:07:53.342 00:07:53.342 01:20:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b4f9mckg76pi4pkuu4een152atlbpw86iucy76jkdjmnaecmjcfga435uyu588dbt8bqkhb0fc6whumfy0uge1hn8p0d2hc1s01dy4of0m2mg326krpqqc2jthgrliv8ymvuh5s5wh4ugbc1za3ro67lbnn02rwa6dzzjbczdmb4yvk4wywf6vuupxopgfel9o8thkiuuawsbrv03z4ftzhwe9d3g5q9zt1p2k1jkb06b5al3j8cqhyg7ot3kfwn0dlsu5ae5v8m14q9dyic9d8v4clqjaddfltw0wldw0518bngwtt2rb4o7o2qasucle5nxxbupfd8wjj0wztjxjqgv8rlisl171yyo9gresgieaoducbahv59uwhqirnam711jrx6tyz53ldf25bwfwth3u09ojzyvliyxunn1rrkovge5lqcz4pc9vcd9j2k1h2ny46ud3l3igovdp6a9gmf9yeza5mfl87gbje8uvinosy7ukbfn2auirvi3fkc == \b\4\f\9\m\c\k\g\7\6\p\i\4\p\k\u\u\4\e\e\n\1\5\2\a\t\l\b\p\w\8\6\i\u\c\y\7\6\j\k\d\j\m\n\a\e\c\m\j\c\f\g\a\4\3\5\u\y\u\5\8\8\d\b\t\8\b\q\k\h\b\0\f\c\6\w\h\u\m\f\y\0\u\g\e\1\h\n\8\p\0\d\2\h\c\1\s\0\1\d\y\4\o\f\0\m\2\m\g\3\2\6\k\r\p\q\q\c\2\j\t\h\g\r\l\i\v\8\y\m\v\u\h\5\s\5\w\h\4\u\g\b\c\1\z\a\3\r\o\6\7\l\b\n\n\0\2\r\w\a\6\d\z\z\j\b\c\z\d\m\b\4\y\v\k\4\w\y\w\f\6\v\u\u\p\x\o\p\g\f\e\l\9\o\8\t\h\k\i\u\u\a\w\s\b\r\v\0\3\z\4\f\t\z\h\w\e\9\d\3\g\5\q\9\z\t\1\p\2\k\1\j\k\b\0\6\b\5\a\l\3\j\8\c\q\h\y\g\7\o\t\3\k\f\w\n\0\d\l\s\u\5\a\e\5\v\8\m\1\4\q\9\d\y\i\c\9\d\8\v\4\c\l\q\j\a\d\d\f\l\t\w\0\w\l\d\w\0\5\1\8\b\n\g\w\t\t\2\r\b\4\o\7\o\2\q\a\s\u\c\l\e\5\n\x\x\b\u\p\f\d\8\w\j\j\0\w\z\t\j\x\j\q\g\v\8\r\l\i\s\l\1\7\1\y\y\o\9\g\r\e\s\g\i\e\a\o\d\u\c\b\a\h\v\5\9\u\w\h\q\i\r\n\a\m\7\1\1\j\r\x\6\t\y\z\5\3\l\d\f\2\5\b\w\f\w\t\h\3\u\0\9\o\j\z\y\v\l\i\y\x\u\n\n\1\r\r\k\o\v\g\e\5\l\q\c\z\4\p\c\9\v\c\d\9\j\2\k\1\h\2\n\y\4\6\u\d\3\l\3\i\g\o\v\d\p\6\a\9\g\m\f\9\y\e\z\a\5\m\f\l\8\7\g\b\j\e\8\u\v\i\n\o\s\y\7\u\k\b\f\n\2\a\u\i\r\v\i\3\f\k\c ]] 00:07:53.342 01:20:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.342 01:20:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:53.342 [2024-09-28 01:20:49.116607] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:53.342 [2024-09-28 01:20:49.116754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61996 ] 00:07:53.342 [2024-09-28 01:20:49.269238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.601 [2024-09-28 01:20:49.418346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.860 [2024-09-28 01:20:49.570712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.797  Copying: 512/512 [B] (average 125 kBps) 00:07:54.797 00:07:54.797 01:20:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b4f9mckg76pi4pkuu4een152atlbpw86iucy76jkdjmnaecmjcfga435uyu588dbt8bqkhb0fc6whumfy0uge1hn8p0d2hc1s01dy4of0m2mg326krpqqc2jthgrliv8ymvuh5s5wh4ugbc1za3ro67lbnn02rwa6dzzjbczdmb4yvk4wywf6vuupxopgfel9o8thkiuuawsbrv03z4ftzhwe9d3g5q9zt1p2k1jkb06b5al3j8cqhyg7ot3kfwn0dlsu5ae5v8m14q9dyic9d8v4clqjaddfltw0wldw0518bngwtt2rb4o7o2qasucle5nxxbupfd8wjj0wztjxjqgv8rlisl171yyo9gresgieaoducbahv59uwhqirnam711jrx6tyz53ldf25bwfwth3u09ojzyvliyxunn1rrkovge5lqcz4pc9vcd9j2k1h2ny46ud3l3igovdp6a9gmf9yeza5mfl87gbje8uvinosy7ukbfn2auirvi3fkc == \b\4\f\9\m\c\k\g\7\6\p\i\4\p\k\u\u\4\e\e\n\1\5\2\a\t\l\b\p\w\8\6\i\u\c\y\7\6\j\k\d\j\m\n\a\e\c\m\j\c\f\g\a\4\3\5\u\y\u\5\8\8\d\b\t\8\b\q\k\h\b\0\f\c\6\w\h\u\m\f\y\0\u\g\e\1\h\n\8\p\0\d\2\h\c\1\s\0\1\d\y\4\o\f\0\m\2\m\g\3\2\6\k\r\p\q\q\c\2\j\t\h\g\r\l\i\v\8\y\m\v\u\h\5\s\5\w\h\4\u\g\b\c\1\z\a\3\r\o\6\7\l\b\n\n\0\2\r\w\a\6\d\z\z\j\b\c\z\d\m\b\4\y\v\k\4\w\y\w\f\6\v\u\u\p\x\o\p\g\f\e\l\9\o\8\t\h\k\i\u\u\a\w\s\b\r\v\0\3\z\4\f\t\z\h\w\e\9\d\3\g\5\q\9\z\t\1\p\2\k\1\j\k\b\0\6\b\5\a\l\3\j\8\c\q\h\y\g\7\o\t\3\k\f\w\n\0\d\l\s\u\5\a\e\5\v\8\m\1\4\q\9\d\y\i\c\9\d\8\v\4\c\l\q\j\a\d\d\f\l\t\w\0\w\l\d\w\0\5\1\8\b\n\g\w\t\t\2\r\b\4\o\7\o\2\q\a\s\u\c\l\e\5\n\x\x\b\u\p\f\d\8\w\j\j\0\w\z\t\j\x\j\q\g\v\8\r\l\i\s\l\1\7\1\y\y\o\9\g\r\e\s\g\i\e\a\o\d\u\c\b\a\h\v\5\9\u\w\h\q\i\r\n\a\m\7\1\1\j\r\x\6\t\y\z\5\3\l\d\f\2\5\b\w\f\w\t\h\3\u\0\9\o\j\z\y\v\l\i\y\x\u\n\n\1\r\r\k\o\v\g\e\5\l\q\c\z\4\p\c\9\v\c\d\9\j\2\k\1\h\2\n\y\4\6\u\d\3\l\3\i\g\o\v\d\p\6\a\9\g\m\f\9\y\e\z\a\5\m\f\l\8\7\g\b\j\e\8\u\v\i\n\o\s\y\7\u\k\b\f\n\2\a\u\i\r\v\i\3\f\k\c ]] 00:07:54.797 01:20:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.797 01:20:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:55.056 [2024-09-28 01:20:50.779435] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:55.056 [2024-09-28 01:20:50.779607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62017 ] 00:07:55.056 [2024-09-28 01:20:50.937423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.314 [2024-09-28 01:20:51.106243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.573 [2024-09-28 01:20:51.263061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.511  Copying: 512/512 [B] (average 500 kBps) 00:07:56.511 00:07:56.511 01:20:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b4f9mckg76pi4pkuu4een152atlbpw86iucy76jkdjmnaecmjcfga435uyu588dbt8bqkhb0fc6whumfy0uge1hn8p0d2hc1s01dy4of0m2mg326krpqqc2jthgrliv8ymvuh5s5wh4ugbc1za3ro67lbnn02rwa6dzzjbczdmb4yvk4wywf6vuupxopgfel9o8thkiuuawsbrv03z4ftzhwe9d3g5q9zt1p2k1jkb06b5al3j8cqhyg7ot3kfwn0dlsu5ae5v8m14q9dyic9d8v4clqjaddfltw0wldw0518bngwtt2rb4o7o2qasucle5nxxbupfd8wjj0wztjxjqgv8rlisl171yyo9gresgieaoducbahv59uwhqirnam711jrx6tyz53ldf25bwfwth3u09ojzyvliyxunn1rrkovge5lqcz4pc9vcd9j2k1h2ny46ud3l3igovdp6a9gmf9yeza5mfl87gbje8uvinosy7ukbfn2auirvi3fkc == \b\4\f\9\m\c\k\g\7\6\p\i\4\p\k\u\u\4\e\e\n\1\5\2\a\t\l\b\p\w\8\6\i\u\c\y\7\6\j\k\d\j\m\n\a\e\c\m\j\c\f\g\a\4\3\5\u\y\u\5\8\8\d\b\t\8\b\q\k\h\b\0\f\c\6\w\h\u\m\f\y\0\u\g\e\1\h\n\8\p\0\d\2\h\c\1\s\0\1\d\y\4\o\f\0\m\2\m\g\3\2\6\k\r\p\q\q\c\2\j\t\h\g\r\l\i\v\8\y\m\v\u\h\5\s\5\w\h\4\u\g\b\c\1\z\a\3\r\o\6\7\l\b\n\n\0\2\r\w\a\6\d\z\z\j\b\c\z\d\m\b\4\y\v\k\4\w\y\w\f\6\v\u\u\p\x\o\p\g\f\e\l\9\o\8\t\h\k\i\u\u\a\w\s\b\r\v\0\3\z\4\f\t\z\h\w\e\9\d\3\g\5\q\9\z\t\1\p\2\k\1\j\k\b\0\6\b\5\a\l\3\j\8\c\q\h\y\g\7\o\t\3\k\f\w\n\0\d\l\s\u\5\a\e\5\v\8\m\1\4\q\9\d\y\i\c\9\d\8\v\4\c\l\q\j\a\d\d\f\l\t\w\0\w\l\d\w\0\5\1\8\b\n\g\w\t\t\2\r\b\4\o\7\o\2\q\a\s\u\c\l\e\5\n\x\x\b\u\p\f\d\8\w\j\j\0\w\z\t\j\x\j\q\g\v\8\r\l\i\s\l\1\7\1\y\y\o\9\g\r\e\s\g\i\e\a\o\d\u\c\b\a\h\v\5\9\u\w\h\q\i\r\n\a\m\7\1\1\j\r\x\6\t\y\z\5\3\l\d\f\2\5\b\w\f\w\t\h\3\u\0\9\o\j\z\y\v\l\i\y\x\u\n\n\1\r\r\k\o\v\g\e\5\l\q\c\z\4\p\c\9\v\c\d\9\j\2\k\1\h\2\n\y\4\6\u\d\3\l\3\i\g\o\v\d\p\6\a\9\g\m\f\9\y\e\z\a\5\m\f\l\8\7\g\b\j\e\8\u\v\i\n\o\s\y\7\u\k\b\f\n\2\a\u\i\r\v\i\3\f\k\c ]] 00:07:56.511 01:20:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:56.511 01:20:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:56.511 01:20:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:56.511 01:20:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:56.511 01:20:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.511 01:20:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:56.511 [2024-09-28 01:20:52.420213] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:56.511 [2024-09-28 01:20:52.420381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:07:56.771 [2024-09-28 01:20:52.576756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.030 [2024-09-28 01:20:52.731928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.030 [2024-09-28 01:20:52.888486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.225  Copying: 512/512 [B] (average 500 kBps) 00:07:58.225 00:07:58.225 01:20:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4xspo8n41osavzkwrbou681ma4gmmx6qyn1v124dtdfck3yhdkrlmyiju0e2fxp6xe4406bqs8nkq78phdsxl9c1rb7wjq29zugi926e5kcnlsa3ormum1p6l3is5hfppuznhx3ceha0zesj20x4y2fvlclshdl4k4syx664guv5enloamragcr1vg1g0o2f8ucefa6dqmi85z4o479drjhyl3041t7v3kvj108t8esae9nrexdyr843pz6vu1ct3i7prhx1bh4cgfcg2q2xhzinp6osxu9gsiu8u4onml7g94gvhvn713fr6drbhiygnvlz0x0c3e6hmcdmomwst8y4p1coh075zlo1e6ptj8o74kg1r93uk0f5v4boizwa9gehh198yvtt8zgfovcltzj23nteq52kcm6ehoqcw77oqh9j2tg6y3rr160pbbekv321yazq5mu3k80dsqzuza6dw9bfsvbys01v2osmryhmcqci2kr1rdot05rspqyj == \4\x\s\p\o\8\n\4\1\o\s\a\v\z\k\w\r\b\o\u\6\8\1\m\a\4\g\m\m\x\6\q\y\n\1\v\1\2\4\d\t\d\f\c\k\3\y\h\d\k\r\l\m\y\i\j\u\0\e\2\f\x\p\6\x\e\4\4\0\6\b\q\s\8\n\k\q\7\8\p\h\d\s\x\l\9\c\1\r\b\7\w\j\q\2\9\z\u\g\i\9\2\6\e\5\k\c\n\l\s\a\3\o\r\m\u\m\1\p\6\l\3\i\s\5\h\f\p\p\u\z\n\h\x\3\c\e\h\a\0\z\e\s\j\2\0\x\4\y\2\f\v\l\c\l\s\h\d\l\4\k\4\s\y\x\6\6\4\g\u\v\5\e\n\l\o\a\m\r\a\g\c\r\1\v\g\1\g\0\o\2\f\8\u\c\e\f\a\6\d\q\m\i\8\5\z\4\o\4\7\9\d\r\j\h\y\l\3\0\4\1\t\7\v\3\k\v\j\1\0\8\t\8\e\s\a\e\9\n\r\e\x\d\y\r\8\4\3\p\z\6\v\u\1\c\t\3\i\7\p\r\h\x\1\b\h\4\c\g\f\c\g\2\q\2\x\h\z\i\n\p\6\o\s\x\u\9\g\s\i\u\8\u\4\o\n\m\l\7\g\9\4\g\v\h\v\n\7\1\3\f\r\6\d\r\b\h\i\y\g\n\v\l\z\0\x\0\c\3\e\6\h\m\c\d\m\o\m\w\s\t\8\y\4\p\1\c\o\h\0\7\5\z\l\o\1\e\6\p\t\j\8\o\7\4\k\g\1\r\9\3\u\k\0\f\5\v\4\b\o\i\z\w\a\9\g\e\h\h\1\9\8\y\v\t\t\8\z\g\f\o\v\c\l\t\z\j\2\3\n\t\e\q\5\2\k\c\m\6\e\h\o\q\c\w\7\7\o\q\h\9\j\2\t\g\6\y\3\r\r\1\6\0\p\b\b\e\k\v\3\2\1\y\a\z\q\5\m\u\3\k\8\0\d\s\q\z\u\z\a\6\d\w\9\b\f\s\v\b\y\s\0\1\v\2\o\s\m\r\y\h\m\c\q\c\i\2\k\r\1\r\d\o\t\0\5\r\s\p\q\y\j ]] 00:07:58.225 01:20:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:58.225 01:20:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:58.225 [2024-09-28 01:20:54.026047] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:58.225 [2024-09-28 01:20:54.026204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62066 ] 00:07:58.484 [2024-09-28 01:20:54.175606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.484 [2024-09-28 01:20:54.332773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.780 [2024-09-28 01:20:54.478085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.742  Copying: 512/512 [B] (average 500 kBps) 00:07:59.742 00:07:59.742 01:20:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4xspo8n41osavzkwrbou681ma4gmmx6qyn1v124dtdfck3yhdkrlmyiju0e2fxp6xe4406bqs8nkq78phdsxl9c1rb7wjq29zugi926e5kcnlsa3ormum1p6l3is5hfppuznhx3ceha0zesj20x4y2fvlclshdl4k4syx664guv5enloamragcr1vg1g0o2f8ucefa6dqmi85z4o479drjhyl3041t7v3kvj108t8esae9nrexdyr843pz6vu1ct3i7prhx1bh4cgfcg2q2xhzinp6osxu9gsiu8u4onml7g94gvhvn713fr6drbhiygnvlz0x0c3e6hmcdmomwst8y4p1coh075zlo1e6ptj8o74kg1r93uk0f5v4boizwa9gehh198yvtt8zgfovcltzj23nteq52kcm6ehoqcw77oqh9j2tg6y3rr160pbbekv321yazq5mu3k80dsqzuza6dw9bfsvbys01v2osmryhmcqci2kr1rdot05rspqyj == \4\x\s\p\o\8\n\4\1\o\s\a\v\z\k\w\r\b\o\u\6\8\1\m\a\4\g\m\m\x\6\q\y\n\1\v\1\2\4\d\t\d\f\c\k\3\y\h\d\k\r\l\m\y\i\j\u\0\e\2\f\x\p\6\x\e\4\4\0\6\b\q\s\8\n\k\q\7\8\p\h\d\s\x\l\9\c\1\r\b\7\w\j\q\2\9\z\u\g\i\9\2\6\e\5\k\c\n\l\s\a\3\o\r\m\u\m\1\p\6\l\3\i\s\5\h\f\p\p\u\z\n\h\x\3\c\e\h\a\0\z\e\s\j\2\0\x\4\y\2\f\v\l\c\l\s\h\d\l\4\k\4\s\y\x\6\6\4\g\u\v\5\e\n\l\o\a\m\r\a\g\c\r\1\v\g\1\g\0\o\2\f\8\u\c\e\f\a\6\d\q\m\i\8\5\z\4\o\4\7\9\d\r\j\h\y\l\3\0\4\1\t\7\v\3\k\v\j\1\0\8\t\8\e\s\a\e\9\n\r\e\x\d\y\r\8\4\3\p\z\6\v\u\1\c\t\3\i\7\p\r\h\x\1\b\h\4\c\g\f\c\g\2\q\2\x\h\z\i\n\p\6\o\s\x\u\9\g\s\i\u\8\u\4\o\n\m\l\7\g\9\4\g\v\h\v\n\7\1\3\f\r\6\d\r\b\h\i\y\g\n\v\l\z\0\x\0\c\3\e\6\h\m\c\d\m\o\m\w\s\t\8\y\4\p\1\c\o\h\0\7\5\z\l\o\1\e\6\p\t\j\8\o\7\4\k\g\1\r\9\3\u\k\0\f\5\v\4\b\o\i\z\w\a\9\g\e\h\h\1\9\8\y\v\t\t\8\z\g\f\o\v\c\l\t\z\j\2\3\n\t\e\q\5\2\k\c\m\6\e\h\o\q\c\w\7\7\o\q\h\9\j\2\t\g\6\y\3\r\r\1\6\0\p\b\b\e\k\v\3\2\1\y\a\z\q\5\m\u\3\k\8\0\d\s\q\z\u\z\a\6\d\w\9\b\f\s\v\b\y\s\0\1\v\2\o\s\m\r\y\h\m\c\q\c\i\2\k\r\1\r\d\o\t\0\5\r\s\p\q\y\j ]] 00:07:59.742 01:20:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:59.742 01:20:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:00.001 [2024-09-28 01:20:55.682475] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:00.001 [2024-09-28 01:20:55.682651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62082 ] 00:08:00.001 [2024-09-28 01:20:55.848206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.259 [2024-09-28 01:20:56.013957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.259 [2024-09-28 01:20:56.171270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.454  Copying: 512/512 [B] (average 250 kBps) 00:08:01.454 00:08:01.454 01:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4xspo8n41osavzkwrbou681ma4gmmx6qyn1v124dtdfck3yhdkrlmyiju0e2fxp6xe4406bqs8nkq78phdsxl9c1rb7wjq29zugi926e5kcnlsa3ormum1p6l3is5hfppuznhx3ceha0zesj20x4y2fvlclshdl4k4syx664guv5enloamragcr1vg1g0o2f8ucefa6dqmi85z4o479drjhyl3041t7v3kvj108t8esae9nrexdyr843pz6vu1ct3i7prhx1bh4cgfcg2q2xhzinp6osxu9gsiu8u4onml7g94gvhvn713fr6drbhiygnvlz0x0c3e6hmcdmomwst8y4p1coh075zlo1e6ptj8o74kg1r93uk0f5v4boizwa9gehh198yvtt8zgfovcltzj23nteq52kcm6ehoqcw77oqh9j2tg6y3rr160pbbekv321yazq5mu3k80dsqzuza6dw9bfsvbys01v2osmryhmcqci2kr1rdot05rspqyj == \4\x\s\p\o\8\n\4\1\o\s\a\v\z\k\w\r\b\o\u\6\8\1\m\a\4\g\m\m\x\6\q\y\n\1\v\1\2\4\d\t\d\f\c\k\3\y\h\d\k\r\l\m\y\i\j\u\0\e\2\f\x\p\6\x\e\4\4\0\6\b\q\s\8\n\k\q\7\8\p\h\d\s\x\l\9\c\1\r\b\7\w\j\q\2\9\z\u\g\i\9\2\6\e\5\k\c\n\l\s\a\3\o\r\m\u\m\1\p\6\l\3\i\s\5\h\f\p\p\u\z\n\h\x\3\c\e\h\a\0\z\e\s\j\2\0\x\4\y\2\f\v\l\c\l\s\h\d\l\4\k\4\s\y\x\6\6\4\g\u\v\5\e\n\l\o\a\m\r\a\g\c\r\1\v\g\1\g\0\o\2\f\8\u\c\e\f\a\6\d\q\m\i\8\5\z\4\o\4\7\9\d\r\j\h\y\l\3\0\4\1\t\7\v\3\k\v\j\1\0\8\t\8\e\s\a\e\9\n\r\e\x\d\y\r\8\4\3\p\z\6\v\u\1\c\t\3\i\7\p\r\h\x\1\b\h\4\c\g\f\c\g\2\q\2\x\h\z\i\n\p\6\o\s\x\u\9\g\s\i\u\8\u\4\o\n\m\l\7\g\9\4\g\v\h\v\n\7\1\3\f\r\6\d\r\b\h\i\y\g\n\v\l\z\0\x\0\c\3\e\6\h\m\c\d\m\o\m\w\s\t\8\y\4\p\1\c\o\h\0\7\5\z\l\o\1\e\6\p\t\j\8\o\7\4\k\g\1\r\9\3\u\k\0\f\5\v\4\b\o\i\z\w\a\9\g\e\h\h\1\9\8\y\v\t\t\8\z\g\f\o\v\c\l\t\z\j\2\3\n\t\e\q\5\2\k\c\m\6\e\h\o\q\c\w\7\7\o\q\h\9\j\2\t\g\6\y\3\r\r\1\6\0\p\b\b\e\k\v\3\2\1\y\a\z\q\5\m\u\3\k\8\0\d\s\q\z\u\z\a\6\d\w\9\b\f\s\v\b\y\s\0\1\v\2\o\s\m\r\y\h\m\c\q\c\i\2\k\r\1\r\d\o\t\0\5\r\s\p\q\y\j ]] 00:08:01.454 01:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.454 01:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:01.454 [2024-09-28 01:20:57.355112] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:01.454 [2024-09-28 01:20:57.355309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62109 ] 00:08:01.713 [2024-09-28 01:20:57.521692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.973 [2024-09-28 01:20:57.669718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.973 [2024-09-28 01:20:57.821766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.168  Copying: 512/512 [B] (average 250 kBps) 00:08:03.168 00:08:03.168 ************************************ 00:08:03.168 END TEST dd_flags_misc 00:08:03.168 ************************************ 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4xspo8n41osavzkwrbou681ma4gmmx6qyn1v124dtdfck3yhdkrlmyiju0e2fxp6xe4406bqs8nkq78phdsxl9c1rb7wjq29zugi926e5kcnlsa3ormum1p6l3is5hfppuznhx3ceha0zesj20x4y2fvlclshdl4k4syx664guv5enloamragcr1vg1g0o2f8ucefa6dqmi85z4o479drjhyl3041t7v3kvj108t8esae9nrexdyr843pz6vu1ct3i7prhx1bh4cgfcg2q2xhzinp6osxu9gsiu8u4onml7g94gvhvn713fr6drbhiygnvlz0x0c3e6hmcdmomwst8y4p1coh075zlo1e6ptj8o74kg1r93uk0f5v4boizwa9gehh198yvtt8zgfovcltzj23nteq52kcm6ehoqcw77oqh9j2tg6y3rr160pbbekv321yazq5mu3k80dsqzuza6dw9bfsvbys01v2osmryhmcqci2kr1rdot05rspqyj == \4\x\s\p\o\8\n\4\1\o\s\a\v\z\k\w\r\b\o\u\6\8\1\m\a\4\g\m\m\x\6\q\y\n\1\v\1\2\4\d\t\d\f\c\k\3\y\h\d\k\r\l\m\y\i\j\u\0\e\2\f\x\p\6\x\e\4\4\0\6\b\q\s\8\n\k\q\7\8\p\h\d\s\x\l\9\c\1\r\b\7\w\j\q\2\9\z\u\g\i\9\2\6\e\5\k\c\n\l\s\a\3\o\r\m\u\m\1\p\6\l\3\i\s\5\h\f\p\p\u\z\n\h\x\3\c\e\h\a\0\z\e\s\j\2\0\x\4\y\2\f\v\l\c\l\s\h\d\l\4\k\4\s\y\x\6\6\4\g\u\v\5\e\n\l\o\a\m\r\a\g\c\r\1\v\g\1\g\0\o\2\f\8\u\c\e\f\a\6\d\q\m\i\8\5\z\4\o\4\7\9\d\r\j\h\y\l\3\0\4\1\t\7\v\3\k\v\j\1\0\8\t\8\e\s\a\e\9\n\r\e\x\d\y\r\8\4\3\p\z\6\v\u\1\c\t\3\i\7\p\r\h\x\1\b\h\4\c\g\f\c\g\2\q\2\x\h\z\i\n\p\6\o\s\x\u\9\g\s\i\u\8\u\4\o\n\m\l\7\g\9\4\g\v\h\v\n\7\1\3\f\r\6\d\r\b\h\i\y\g\n\v\l\z\0\x\0\c\3\e\6\h\m\c\d\m\o\m\w\s\t\8\y\4\p\1\c\o\h\0\7\5\z\l\o\1\e\6\p\t\j\8\o\7\4\k\g\1\r\9\3\u\k\0\f\5\v\4\b\o\i\z\w\a\9\g\e\h\h\1\9\8\y\v\t\t\8\z\g\f\o\v\c\l\t\z\j\2\3\n\t\e\q\5\2\k\c\m\6\e\h\o\q\c\w\7\7\o\q\h\9\j\2\t\g\6\y\3\r\r\1\6\0\p\b\b\e\k\v\3\2\1\y\a\z\q\5\m\u\3\k\8\0\d\s\q\z\u\z\a\6\d\w\9\b\f\s\v\b\y\s\0\1\v\2\o\s\m\r\y\h\m\c\q\c\i\2\k\r\1\r\d\o\t\0\5\r\s\p\q\y\j ]] 00:08:03.168 00:08:03.168 real 0m13.149s 00:08:03.168 user 0m10.783s 00:08:03.168 sys 0m6.439s 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:03.168 * Second test run, disabling liburing, forcing AIO 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set 
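
dd_flags_misc, which finishes above, walks a small flag matrix: read-side flags direct and nonblock crossed with write-side flags direct, nonblock, sync and dsync, copying 512 random bytes for each combination and verifying the destination matches the source. Roughly, under the same illustrative assumptions as the earlier sketch (DD_ROOT shorthand, /dev/urandom instead of gen_bytes, cmp instead of the suite's string comparison):

    DD_ROOT=/home/vagrant/spdk_repo/spdk                # locations taken from the log
    SPDK_DD=$DD_ROOT/build/bin/spdk_dd
    src=$DD_ROOT/test/dd/dd.dump0
    dst=$DD_ROOT/test/dd/dd.dump1
    head -c 512 /dev/urandom > "$src"

    flags_ro=(direct nonblock)                          # read-side flags exercised by the test
    flags_rw=("${flags_ro[@]}" sync dsync)              # write side adds sync and dsync

    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
        cmp -s "$src" "$dst"                            # data must survive every combination unchanged
      done
    done
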
+x 00:08:03.168 ************************************ 00:08:03.168 START TEST dd_flag_append_forced_aio 00:08:03.168 ************************************ 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=y1v9ingb4g8t7ue8546spe9au0l74sl1 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=w9p92fi6iuclnob0db5k618uvomcnnx9 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s y1v9ingb4g8t7ue8546spe9au0l74sl1 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s w9p92fi6iuclnob0db5k618uvomcnnx9 00:08:03.168 01:20:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:03.168 [2024-09-28 01:20:59.050936] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:03.168 [2024-09-28 01:20:59.051120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62150 ] 00:08:03.428 [2024-09-28 01:20:59.219762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.687 [2024-09-28 01:20:59.375818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.687 [2024-09-28 01:20:59.525666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.063  Copying: 32/32 [B] (average 31 kBps) 00:08:05.063 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ w9p92fi6iuclnob0db5k618uvomcnnx9y1v9ingb4g8t7ue8546spe9au0l74sl1 == \w\9\p\9\2\f\i\6\i\u\c\l\n\o\b\0\d\b\5\k\6\1\8\u\v\o\m\c\n\n\x\9\y\1\v\9\i\n\g\b\4\g\8\t\7\u\e\8\5\4\6\s\p\e\9\a\u\0\l\7\4\s\l\1 ]] 00:08:05.063 00:08:05.063 real 0m1.717s 00:08:05.063 user 0m1.393s 00:08:05.063 sys 0m0.201s 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:05.063 ************************************ 00:08:05.063 END TEST dd_flag_append_forced_aio 00:08:05.063 ************************************ 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:05.063 ************************************ 00:08:05.063 START TEST dd_flag_directory_forced_aio 00:08:05.063 ************************************ 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.063 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
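
The append case seeds the destination with one 32-byte random string, writes a second one to the source, copies with --oflag=append, and requires the destination to end up as its original content followed by the source, which is exactly what the w9p92...y1v9... comparison above shows. A rough equivalent (tr/head stand in for gen_bytes 32; DD_ROOT is shorthand for the log's paths):

    DD_ROOT=/home/vagrant/spdk_repo/spdk                # locations taken from the log
    SPDK_DD=$DD_ROOT/build/bin/spdk_dd
    src=$DD_ROOT/test/dd/dd.dump0
    dst=$DD_ROOT/test/dd/dd.dump1

    dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)    # stand-ins for gen_bytes 32
    dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    printf %s "$dump0" > "$src"
    printf %s "$dump1" > "$dst"

    "$SPDK_DD" --aio --if="$src" --of="$dst" --oflag=append  # --aio: POSIX AIO pass, liburing disabled
    [[ $(< "$dst") == "${dump1}${dump0}" ]]                  # old destination content first, appended source after it
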
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.064 01:21:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.064 [2024-09-28 01:21:00.817901] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:05.064 [2024-09-28 01:21:00.818075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62188 ] 00:08:05.064 [2024-09-28 01:21:00.983600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.323 [2024-09-28 01:21:01.149545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.582 [2024-09-28 01:21:01.309912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.582 [2024-09-28 01:21:01.393521] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.582 [2024-09-28 01:21:01.393598] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.582 [2024-09-28 01:21:01.393619] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.149 [2024-09-28 01:21:02.046280] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.717 01:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:06.717 [2024-09-28 01:21:02.548166] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:06.717 [2024-09-28 01:21:02.548313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62215 ] 00:08:06.976 [2024-09-28 01:21:02.705282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.976 [2024-09-28 01:21:02.884238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.234 [2024-09-28 01:21:03.050001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.234 [2024-09-28 01:21:03.139888] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:07.234 [2024-09-28 01:21:03.139972] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:07.234 [2024-09-28 01:21:03.139995] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.169 [2024-09-28 01:21:03.798108] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:08:08.427 00:08:08.427 real 0m3.516s 00:08:08.427 user 0m2.902s 00:08:08.427 sys 0m0.387s 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:08.427 ************************************ 00:08:08.427 END TEST dd_flag_directory_forced_aio 00:08:08.427 ************************************ 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:08.427 ************************************ 00:08:08.427 START TEST dd_flag_nofollow_forced_aio 00:08:08.427 ************************************ 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.427 01:21:04 
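
dd_flag_directory_forced_aio, whose summary appears above, is a pure negative test: pointing --iflag=directory or --oflag=directory at a regular file must make spdk_dd fail with "Not a directory", and the suite's NOT wrapper only passes when the exit status is non-zero. Sketched with plain shell negation instead of NOT (DD_ROOT shorthand as before):

    DD_ROOT=/home/vagrant/spdk_repo/spdk                # locations taken from the log
    SPDK_DD=$DD_ROOT/build/bin/spdk_dd
    f=$DD_ROOT/test/dd/dd.dump0
    head -c 512 /dev/urandom > "$f"

    # Both directions must be rejected with "Not a directory" (non-zero exit).
    ! "$SPDK_DD" --aio --if="$f" --iflag=directory --of="$f"
    ! "$SPDK_DD" --aio --if="$f" --of="$f" --oflag=directory
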
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.427 01:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.685 [2024-09-28 01:21:04.375591] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:08.685 [2024-09-28 01:21:04.375734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62256 ] 00:08:08.685 [2024-09-28 01:21:04.534107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.943 [2024-09-28 01:21:04.776639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.209 [2024-09-28 01:21:04.980515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.209 [2024-09-28 01:21:05.075664] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:09.209 [2024-09-28 01:21:05.075732] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:09.209 [2024-09-28 01:21:05.075754] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.790 [2024-09-28 01:21:05.680508] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.440 01:21:06 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.440 01:21:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:10.440 [2024-09-28 01:21:06.174115] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:10.440 [2024-09-28 01:21:06.174302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62279 ] 00:08:10.440 [2024-09-28 01:21:06.342573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.699 [2024-09-28 01:21:06.492025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.958 [2024-09-28 01:21:06.652655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.958 [2024-09-28 01:21:06.734707] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:10.958 [2024-09-28 01:21:06.734784] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:10.958 [2024-09-28 01:21:06.734807] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.526 [2024-09-28 01:21:07.349673] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:11.784 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:11.784 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.784 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:11.784 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:11.784 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:11.784 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.784 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:08:11.784 01:21:07 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:11.784 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:12.043 01:21:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.043 [2024-09-28 01:21:07.829757] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:12.043 [2024-09-28 01:21:07.830204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62302 ] 00:08:12.302 [2024-09-28 01:21:07.997199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.302 [2024-09-28 01:21:08.162529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.560 [2024-09-28 01:21:08.317761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.495  Copying: 512/512 [B] (average 500 kBps) 00:08:13.495 00:08:13.495 ************************************ 00:08:13.495 END TEST dd_flag_nofollow_forced_aio 00:08:13.495 ************************************ 00:08:13.495 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 5bhccdgfa0btfjz4fz25wimkofzy36aoiddse768g1aajtrv9206cm28chdmi9hobo8rewxdkgt531qafu2vplyy0ybr93vuds8cuciip3ia2klutdp8gydqfsz36c7wzldc320vkyh0izxehf02rvjyf5ds7s1yxn56gmflyevmzop7h93d46w0u52uynegb6o6eh1052d7bxsvomnxukmr3kqj3tqulicdpp6qp85yd8yns2fh9z46wooan49ego58nnmdrubott3kh9lsnegipphkx8fg5qlgzllqiatmbn4amcu8bs907h86inp2bbpw2tnmau0us2ljicnv41okdhod37s56ltrnbxy2x4rsdozgw5xz9jvwr89ps8u8cp0yu24gg60xi5aphtg5znnvsoc4zsol8gy1su48blrb439tdwnl7cz5drvvc4mc1ta9va50xxcc2sbj0qdmokxupph0l0e3iscjuikqk1uh6mjijq4ckr2s2fpkjsk == \5\b\h\c\c\d\g\f\a\0\b\t\f\j\z\4\f\z\2\5\w\i\m\k\o\f\z\y\3\6\a\o\i\d\d\s\e\7\6\8\g\1\a\a\j\t\r\v\9\2\0\6\c\m\2\8\c\h\d\m\i\9\h\o\b\o\8\r\e\w\x\d\k\g\t\5\3\1\q\a\f\u\2\v\p\l\y\y\0\y\b\r\9\3\v\u\d\s\8\c\u\c\i\i\p\3\i\a\2\k\l\u\t\d\p\8\g\y\d\q\f\s\z\3\6\c\7\w\z\l\d\c\3\2\0\v\k\y\h\0\i\z\x\e\h\f\0\2\r\v\j\y\f\5\d\s\7\s\1\y\x\n\5\6\g\m\f\l\y\e\v\m\z\o\p\7\h\9\3\d\4\6\w\0\u\5\2\u\y\n\e\g\b\6\o\6\e\h\1\0\5\2\d\7\b\x\s\v\o\m\n\x\u\k\m\r\3\k\q\j\3\t\q\u\l\i\c\d\p\p\6\q\p\8\5\y\d\8\y\n\s\2\f\h\9\z\4\6\w\o\o\a\n\4\9\e\g\o\5\8\n\n\m\d\r\u\b\o\t\t\3\k\h\9\l\s\n\e\g\i\p\p\h\k\x\8\f\g\5\q\l\g\z\l\l\q\i\a\t\m\b\n\4\a\m\c\u\8\b\s\9\0\7\h\8\6\i\n\p\2\b\b\p\w\2\t\n\m\a\u\0\u\s\2\l\j\i\c\n\v\4\1\o\k\d\h\o\d\3\7\s\5\6\l\t\r\n\b\x\y\2\x\4\r\s\d\o\z\g\w\5\x\z\9\j\v\w\r\8\9\p\s\8\u\8\c\p\0\y\u\2\4\g\g\6\0\x\i\5\a\p\h\t\g\5\z\n\n\v\s\o\c\4\z\s\o\l\8\g\y\1\s\u\4\8\b\l\r\b\4\3\9\t\d\w\n\l\7\c\z\5\d\r\v\v\c\4\m\c\1\t\a\9\v\a\5\0\x\x\c\c\2\s\b\j\0\q\d\m\o\k\x\u\p\p\h\0\l\0\e\3\i\s\c\j\u\i\k\q\k\1\u\h\6\m\j\i\j\q\4\c\k\r\2\s\2\f\p\k\j\s\k ]] 00:08:13.495 00:08:13.495 real 0m5.147s 00:08:13.495 user 0m4.229s 00:08:13.495 sys 0m0.567s 00:08:13.495 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.495 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:13.753 
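
The dd_flag_nofollow_forced_aio run above checks three things: reading through a symlink with --iflag=nofollow fails with "Too many levels of symbolic links" (ELOOP), writing through one with --oflag=nofollow fails the same way, and the default copy through the link succeeds with intact data. A condensed version (plain ! instead of the NOT helper, DD_ROOT shorthand as before):

    DD_ROOT=/home/vagrant/spdk_repo/spdk                # locations taken from the log
    SPDK_DD=$DD_ROOT/build/bin/spdk_dd
    src=$DD_ROOT/test/dd/dd.dump0
    dst=$DD_ROOT/test/dd/dd.dump1
    head -c 512 /dev/urandom > "$src"
    : > "$dst"
    ln -fs "$src" "$src.link"
    ln -fs "$dst" "$dst.link"

    # O_NOFOLLOW on a symlink has to fail with "Too many levels of symbolic links".
    ! "$SPDK_DD" --aio --if="$src.link" --iflag=nofollow --of="$dst"
    ! "$SPDK_DD" --aio --if="$src" --of="$dst.link" --oflag=nofollow

    # Without the flag the link is followed and the data comes through intact.
    "$SPDK_DD" --aio --if="$src.link" --of="$dst"
    cmp -s "$src" "$dst"
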
01:21:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:13.753 ************************************ 00:08:13.753 START TEST dd_flag_noatime_forced_aio 00:08:13.753 ************************************ 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1727486468 00:08:13.753 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.754 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1727486469 00:08:13.754 01:21:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:14.689 01:21:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.689 [2024-09-28 01:21:10.607363] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:14.689 [2024-09-28 01:21:10.607576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62355 ] 00:08:14.946 [2024-09-28 01:21:10.781647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.205 [2024-09-28 01:21:10.987323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.463 [2024-09-28 01:21:11.149266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.400  Copying: 512/512 [B] (average 500 kBps) 00:08:16.400 00:08:16.400 01:21:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.400 01:21:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1727486468 )) 00:08:16.400 01:21:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.400 01:21:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1727486469 )) 00:08:16.400 01:21:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.658 [2024-09-28 01:21:12.346326] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:16.658 [2024-09-28 01:21:12.346883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62384 ] 00:08:16.658 [2024-09-28 01:21:12.509428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.917 [2024-09-28 01:21:12.668630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.917 [2024-09-28 01:21:12.822279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.111  Copying: 512/512 [B] (average 500 kBps) 00:08:18.111 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.111 ************************************ 00:08:18.111 END TEST dd_flag_noatime_forced_aio 00:08:18.111 ************************************ 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1727486472 )) 00:08:18.111 00:08:18.111 real 0m4.435s 00:08:18.111 user 0m2.769s 00:08:18.111 sys 0m0.420s 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:18.111 
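
The *_forced_aio variants rerun the same assertions with liburing disabled; the only difference visible in the invocations is the extra --aio switch the suite adds via DD_APP+=("--aio"), announced earlier as "disabling liburing, forcing AIO". The noatime rerun that just ended, for example, reduces to the following (same illustrative shorthand and atime caveats as the earlier sketches):

    DD_ROOT=/home/vagrant/spdk_repo/spdk                # locations taken from the log
    DD_APP=("$DD_ROOT/build/bin/spdk_dd" --aio)         # the only change in the second pass: force POSIX AIO
    src=$DD_ROOT/test/dd/dd.dump0
    dst=$DD_ROOT/test/dd/dd.dump1

    head -c 512 /dev/urandom > "$src"
    atime_before=$(stat --printf=%X "$src")
    sleep 1
    "${DD_APP[@]}" --if="$src" --iflag=noatime --of="$dst"
    (( $(stat --printf=%X "$src") == atime_before ))    # noatime read leaves the access time alone
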
************************************ 00:08:18.111 START TEST dd_flags_misc_forced_aio 00:08:18.111 ************************************ 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.111 01:21:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:18.370 [2024-09-28 01:21:14.082732] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:18.370 [2024-09-28 01:21:14.082945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62417 ] 00:08:18.370 [2024-09-28 01:21:14.253995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.629 [2024-09-28 01:21:14.423361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.888 [2024-09-28 01:21:14.585117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.823  Copying: 512/512 [B] (average 500 kBps) 00:08:19.823 00:08:19.823 01:21:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ k7x61mfvos2omavuz9k3o50i66ywjlk0gyxykgmitgaw81fczl0vzohwhnmko2x6gwp1jlj9qrr617u3cvogp5f6s7320zsq9oxoq3iz26cwehp11eof4f9mwizfdl9h6o2ihoofksingdkxzmwi3a2drgmq8kuy06sgr2q6k5xuaaek7wo66y4xy3chc4t39c8s4w0b1fkv7efen1luursbq543zrptlc4qk6pr7mzj3k2npa499dlwmocl4cn93ek90meubp0d2ibsmprm82qu8lzchj1w9yl30kmiam7k1ogdg9x0lbgbn2m34r17mzc1gf3nbxhe7oue3mcnpmmdpipjuto35ujfnwwo8y4mv1fc76ln8xclq65ankifuo731i8put5dvcaba65ff7jz7vctnbor4zm7hdlwko65gfsh3q98sze9vwfuo3uuo73nug7s74aknb3shaq7bqg5plklhn44t4rregf6e932dy68mpqnlpsoujqd9dqj == 
\k\7\x\6\1\m\f\v\o\s\2\o\m\a\v\u\z\9\k\3\o\5\0\i\6\6\y\w\j\l\k\0\g\y\x\y\k\g\m\i\t\g\a\w\8\1\f\c\z\l\0\v\z\o\h\w\h\n\m\k\o\2\x\6\g\w\p\1\j\l\j\9\q\r\r\6\1\7\u\3\c\v\o\g\p\5\f\6\s\7\3\2\0\z\s\q\9\o\x\o\q\3\i\z\2\6\c\w\e\h\p\1\1\e\o\f\4\f\9\m\w\i\z\f\d\l\9\h\6\o\2\i\h\o\o\f\k\s\i\n\g\d\k\x\z\m\w\i\3\a\2\d\r\g\m\q\8\k\u\y\0\6\s\g\r\2\q\6\k\5\x\u\a\a\e\k\7\w\o\6\6\y\4\x\y\3\c\h\c\4\t\3\9\c\8\s\4\w\0\b\1\f\k\v\7\e\f\e\n\1\l\u\u\r\s\b\q\5\4\3\z\r\p\t\l\c\4\q\k\6\p\r\7\m\z\j\3\k\2\n\p\a\4\9\9\d\l\w\m\o\c\l\4\c\n\9\3\e\k\9\0\m\e\u\b\p\0\d\2\i\b\s\m\p\r\m\8\2\q\u\8\l\z\c\h\j\1\w\9\y\l\3\0\k\m\i\a\m\7\k\1\o\g\d\g\9\x\0\l\b\g\b\n\2\m\3\4\r\1\7\m\z\c\1\g\f\3\n\b\x\h\e\7\o\u\e\3\m\c\n\p\m\m\d\p\i\p\j\u\t\o\3\5\u\j\f\n\w\w\o\8\y\4\m\v\1\f\c\7\6\l\n\8\x\c\l\q\6\5\a\n\k\i\f\u\o\7\3\1\i\8\p\u\t\5\d\v\c\a\b\a\6\5\f\f\7\j\z\7\v\c\t\n\b\o\r\4\z\m\7\h\d\l\w\k\o\6\5\g\f\s\h\3\q\9\8\s\z\e\9\v\w\f\u\o\3\u\u\o\7\3\n\u\g\7\s\7\4\a\k\n\b\3\s\h\a\q\7\b\q\g\5\p\l\k\l\h\n\4\4\t\4\r\r\e\g\f\6\e\9\3\2\d\y\6\8\m\p\q\n\l\p\s\o\u\j\q\d\9\d\q\j ]] 00:08:19.823 01:21:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.823 01:21:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:20.082 [2024-09-28 01:21:15.825233] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:20.082 [2024-09-28 01:21:15.825412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62442 ] 00:08:20.082 [2024-09-28 01:21:15.987270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.341 [2024-09-28 01:21:16.172026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.599 [2024-09-28 01:21:16.344053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.536  Copying: 512/512 [B] (average 500 kBps) 00:08:21.536 00:08:21.795 01:21:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ k7x61mfvos2omavuz9k3o50i66ywjlk0gyxykgmitgaw81fczl0vzohwhnmko2x6gwp1jlj9qrr617u3cvogp5f6s7320zsq9oxoq3iz26cwehp11eof4f9mwizfdl9h6o2ihoofksingdkxzmwi3a2drgmq8kuy06sgr2q6k5xuaaek7wo66y4xy3chc4t39c8s4w0b1fkv7efen1luursbq543zrptlc4qk6pr7mzj3k2npa499dlwmocl4cn93ek90meubp0d2ibsmprm82qu8lzchj1w9yl30kmiam7k1ogdg9x0lbgbn2m34r17mzc1gf3nbxhe7oue3mcnpmmdpipjuto35ujfnwwo8y4mv1fc76ln8xclq65ankifuo731i8put5dvcaba65ff7jz7vctnbor4zm7hdlwko65gfsh3q98sze9vwfuo3uuo73nug7s74aknb3shaq7bqg5plklhn44t4rregf6e932dy68mpqnlpsoujqd9dqj == 
\k\7\x\6\1\m\f\v\o\s\2\o\m\a\v\u\z\9\k\3\o\5\0\i\6\6\y\w\j\l\k\0\g\y\x\y\k\g\m\i\t\g\a\w\8\1\f\c\z\l\0\v\z\o\h\w\h\n\m\k\o\2\x\6\g\w\p\1\j\l\j\9\q\r\r\6\1\7\u\3\c\v\o\g\p\5\f\6\s\7\3\2\0\z\s\q\9\o\x\o\q\3\i\z\2\6\c\w\e\h\p\1\1\e\o\f\4\f\9\m\w\i\z\f\d\l\9\h\6\o\2\i\h\o\o\f\k\s\i\n\g\d\k\x\z\m\w\i\3\a\2\d\r\g\m\q\8\k\u\y\0\6\s\g\r\2\q\6\k\5\x\u\a\a\e\k\7\w\o\6\6\y\4\x\y\3\c\h\c\4\t\3\9\c\8\s\4\w\0\b\1\f\k\v\7\e\f\e\n\1\l\u\u\r\s\b\q\5\4\3\z\r\p\t\l\c\4\q\k\6\p\r\7\m\z\j\3\k\2\n\p\a\4\9\9\d\l\w\m\o\c\l\4\c\n\9\3\e\k\9\0\m\e\u\b\p\0\d\2\i\b\s\m\p\r\m\8\2\q\u\8\l\z\c\h\j\1\w\9\y\l\3\0\k\m\i\a\m\7\k\1\o\g\d\g\9\x\0\l\b\g\b\n\2\m\3\4\r\1\7\m\z\c\1\g\f\3\n\b\x\h\e\7\o\u\e\3\m\c\n\p\m\m\d\p\i\p\j\u\t\o\3\5\u\j\f\n\w\w\o\8\y\4\m\v\1\f\c\7\6\l\n\8\x\c\l\q\6\5\a\n\k\i\f\u\o\7\3\1\i\8\p\u\t\5\d\v\c\a\b\a\6\5\f\f\7\j\z\7\v\c\t\n\b\o\r\4\z\m\7\h\d\l\w\k\o\6\5\g\f\s\h\3\q\9\8\s\z\e\9\v\w\f\u\o\3\u\u\o\7\3\n\u\g\7\s\7\4\a\k\n\b\3\s\h\a\q\7\b\q\g\5\p\l\k\l\h\n\4\4\t\4\r\r\e\g\f\6\e\9\3\2\d\y\6\8\m\p\q\n\l\p\s\o\u\j\q\d\9\d\q\j ]] 00:08:21.795 01:21:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.795 01:21:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:21.795 [2024-09-28 01:21:17.583095] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:21.795 [2024-09-28 01:21:17.583287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62467 ] 00:08:22.054 [2024-09-28 01:21:17.753155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.054 [2024-09-28 01:21:17.918540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.313 [2024-09-28 01:21:18.068927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.248  Copying: 512/512 [B] (average 166 kBps) 00:08:23.248 00:08:23.248 01:21:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ k7x61mfvos2omavuz9k3o50i66ywjlk0gyxykgmitgaw81fczl0vzohwhnmko2x6gwp1jlj9qrr617u3cvogp5f6s7320zsq9oxoq3iz26cwehp11eof4f9mwizfdl9h6o2ihoofksingdkxzmwi3a2drgmq8kuy06sgr2q6k5xuaaek7wo66y4xy3chc4t39c8s4w0b1fkv7efen1luursbq543zrptlc4qk6pr7mzj3k2npa499dlwmocl4cn93ek90meubp0d2ibsmprm82qu8lzchj1w9yl30kmiam7k1ogdg9x0lbgbn2m34r17mzc1gf3nbxhe7oue3mcnpmmdpipjuto35ujfnwwo8y4mv1fc76ln8xclq65ankifuo731i8put5dvcaba65ff7jz7vctnbor4zm7hdlwko65gfsh3q98sze9vwfuo3uuo73nug7s74aknb3shaq7bqg5plklhn44t4rregf6e932dy68mpqnlpsoujqd9dqj == 
\k\7\x\6\1\m\f\v\o\s\2\o\m\a\v\u\z\9\k\3\o\5\0\i\6\6\y\w\j\l\k\0\g\y\x\y\k\g\m\i\t\g\a\w\8\1\f\c\z\l\0\v\z\o\h\w\h\n\m\k\o\2\x\6\g\w\p\1\j\l\j\9\q\r\r\6\1\7\u\3\c\v\o\g\p\5\f\6\s\7\3\2\0\z\s\q\9\o\x\o\q\3\i\z\2\6\c\w\e\h\p\1\1\e\o\f\4\f\9\m\w\i\z\f\d\l\9\h\6\o\2\i\h\o\o\f\k\s\i\n\g\d\k\x\z\m\w\i\3\a\2\d\r\g\m\q\8\k\u\y\0\6\s\g\r\2\q\6\k\5\x\u\a\a\e\k\7\w\o\6\6\y\4\x\y\3\c\h\c\4\t\3\9\c\8\s\4\w\0\b\1\f\k\v\7\e\f\e\n\1\l\u\u\r\s\b\q\5\4\3\z\r\p\t\l\c\4\q\k\6\p\r\7\m\z\j\3\k\2\n\p\a\4\9\9\d\l\w\m\o\c\l\4\c\n\9\3\e\k\9\0\m\e\u\b\p\0\d\2\i\b\s\m\p\r\m\8\2\q\u\8\l\z\c\h\j\1\w\9\y\l\3\0\k\m\i\a\m\7\k\1\o\g\d\g\9\x\0\l\b\g\b\n\2\m\3\4\r\1\7\m\z\c\1\g\f\3\n\b\x\h\e\7\o\u\e\3\m\c\n\p\m\m\d\p\i\p\j\u\t\o\3\5\u\j\f\n\w\w\o\8\y\4\m\v\1\f\c\7\6\l\n\8\x\c\l\q\6\5\a\n\k\i\f\u\o\7\3\1\i\8\p\u\t\5\d\v\c\a\b\a\6\5\f\f\7\j\z\7\v\c\t\n\b\o\r\4\z\m\7\h\d\l\w\k\o\6\5\g\f\s\h\3\q\9\8\s\z\e\9\v\w\f\u\o\3\u\u\o\7\3\n\u\g\7\s\7\4\a\k\n\b\3\s\h\a\q\7\b\q\g\5\p\l\k\l\h\n\4\4\t\4\r\r\e\g\f\6\e\9\3\2\d\y\6\8\m\p\q\n\l\p\s\o\u\j\q\d\9\d\q\j ]] 00:08:23.248 01:21:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.248 01:21:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:23.507 [2024-09-28 01:21:19.252513] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:23.507 [2024-09-28 01:21:19.252688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62481 ] 00:08:23.507 [2024-09-28 01:21:19.414957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.766 [2024-09-28 01:21:19.576516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.024 [2024-09-28 01:21:19.731379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.957  Copying: 512/512 [B] (average 250 kBps) 00:08:24.957 00:08:24.957 01:21:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ k7x61mfvos2omavuz9k3o50i66ywjlk0gyxykgmitgaw81fczl0vzohwhnmko2x6gwp1jlj9qrr617u3cvogp5f6s7320zsq9oxoq3iz26cwehp11eof4f9mwizfdl9h6o2ihoofksingdkxzmwi3a2drgmq8kuy06sgr2q6k5xuaaek7wo66y4xy3chc4t39c8s4w0b1fkv7efen1luursbq543zrptlc4qk6pr7mzj3k2npa499dlwmocl4cn93ek90meubp0d2ibsmprm82qu8lzchj1w9yl30kmiam7k1ogdg9x0lbgbn2m34r17mzc1gf3nbxhe7oue3mcnpmmdpipjuto35ujfnwwo8y4mv1fc76ln8xclq65ankifuo731i8put5dvcaba65ff7jz7vctnbor4zm7hdlwko65gfsh3q98sze9vwfuo3uuo73nug7s74aknb3shaq7bqg5plklhn44t4rregf6e932dy68mpqnlpsoujqd9dqj == 
\k\7\x\6\1\m\f\v\o\s\2\o\m\a\v\u\z\9\k\3\o\5\0\i\6\6\y\w\j\l\k\0\g\y\x\y\k\g\m\i\t\g\a\w\8\1\f\c\z\l\0\v\z\o\h\w\h\n\m\k\o\2\x\6\g\w\p\1\j\l\j\9\q\r\r\6\1\7\u\3\c\v\o\g\p\5\f\6\s\7\3\2\0\z\s\q\9\o\x\o\q\3\i\z\2\6\c\w\e\h\p\1\1\e\o\f\4\f\9\m\w\i\z\f\d\l\9\h\6\o\2\i\h\o\o\f\k\s\i\n\g\d\k\x\z\m\w\i\3\a\2\d\r\g\m\q\8\k\u\y\0\6\s\g\r\2\q\6\k\5\x\u\a\a\e\k\7\w\o\6\6\y\4\x\y\3\c\h\c\4\t\3\9\c\8\s\4\w\0\b\1\f\k\v\7\e\f\e\n\1\l\u\u\r\s\b\q\5\4\3\z\r\p\t\l\c\4\q\k\6\p\r\7\m\z\j\3\k\2\n\p\a\4\9\9\d\l\w\m\o\c\l\4\c\n\9\3\e\k\9\0\m\e\u\b\p\0\d\2\i\b\s\m\p\r\m\8\2\q\u\8\l\z\c\h\j\1\w\9\y\l\3\0\k\m\i\a\m\7\k\1\o\g\d\g\9\x\0\l\b\g\b\n\2\m\3\4\r\1\7\m\z\c\1\g\f\3\n\b\x\h\e\7\o\u\e\3\m\c\n\p\m\m\d\p\i\p\j\u\t\o\3\5\u\j\f\n\w\w\o\8\y\4\m\v\1\f\c\7\6\l\n\8\x\c\l\q\6\5\a\n\k\i\f\u\o\7\3\1\i\8\p\u\t\5\d\v\c\a\b\a\6\5\f\f\7\j\z\7\v\c\t\n\b\o\r\4\z\m\7\h\d\l\w\k\o\6\5\g\f\s\h\3\q\9\8\s\z\e\9\v\w\f\u\o\3\u\u\o\7\3\n\u\g\7\s\7\4\a\k\n\b\3\s\h\a\q\7\b\q\g\5\p\l\k\l\h\n\4\4\t\4\r\r\e\g\f\6\e\9\3\2\d\y\6\8\m\p\q\n\l\p\s\o\u\j\q\d\9\d\q\j ]] 00:08:24.957 01:21:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:24.957 01:21:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:24.957 01:21:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:24.957 01:21:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:24.957 01:21:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.957 01:21:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:25.215 [2024-09-28 01:21:20.982266] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:25.215 [2024-09-28 01:21:20.982491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62506 ] 00:08:25.475 [2024-09-28 01:21:21.154558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.475 [2024-09-28 01:21:21.314616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.733 [2024-09-28 01:21:21.467120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.679  Copying: 512/512 [B] (average 500 kBps) 00:08:26.679 00:08:26.679 01:21:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f53fxu8c05gw4v9j7n437s0jm3pv5g2qlo52hdkx0byigr8yn9k9je9cwt7cftq306t652d3rrcvdzd0iq4c9ha2lx08xxbum85g83px9js6nl4xe395e8ly7rnebyk4efcnx0vtbba4hvqsb6t2x0zf6m4ithdg765t37qzrx3chiswgbx4bjqyf7n12zd0g72a6jm55d9p7onzctf7sfnv427aycc79k8tm2gw4l239gn9m6pip8gu913pjsx77ddpnl4deekti5lfvevd6qsb8241u6gtfme3kfpmps6lx4ky6e3rz63gvmit88u2nugjzc1isf9xil7om0k7g79h26jpcqmltl2xrh0iubet7bn1ynk5otnyxcfn5jux2kjejsyjla5seqcfalaztn8kog319av136gytpgu6o1a2tgz2pmyixl8ppus4xcqwav26on3hmnc2b36zqd9cjinov0iz9glx82l6a0xqebanmmgj0iej1l4i7apc9al == \f\5\3\f\x\u\8\c\0\5\g\w\4\v\9\j\7\n\4\3\7\s\0\j\m\3\p\v\5\g\2\q\l\o\5\2\h\d\k\x\0\b\y\i\g\r\8\y\n\9\k\9\j\e\9\c\w\t\7\c\f\t\q\3\0\6\t\6\5\2\d\3\r\r\c\v\d\z\d\0\i\q\4\c\9\h\a\2\l\x\0\8\x\x\b\u\m\8\5\g\8\3\p\x\9\j\s\6\n\l\4\x\e\3\9\5\e\8\l\y\7\r\n\e\b\y\k\4\e\f\c\n\x\0\v\t\b\b\a\4\h\v\q\s\b\6\t\2\x\0\z\f\6\m\4\i\t\h\d\g\7\6\5\t\3\7\q\z\r\x\3\c\h\i\s\w\g\b\x\4\b\j\q\y\f\7\n\1\2\z\d\0\g\7\2\a\6\j\m\5\5\d\9\p\7\o\n\z\c\t\f\7\s\f\n\v\4\2\7\a\y\c\c\7\9\k\8\t\m\2\g\w\4\l\2\3\9\g\n\9\m\6\p\i\p\8\g\u\9\1\3\p\j\s\x\7\7\d\d\p\n\l\4\d\e\e\k\t\i\5\l\f\v\e\v\d\6\q\s\b\8\2\4\1\u\6\g\t\f\m\e\3\k\f\p\m\p\s\6\l\x\4\k\y\6\e\3\r\z\6\3\g\v\m\i\t\8\8\u\2\n\u\g\j\z\c\1\i\s\f\9\x\i\l\7\o\m\0\k\7\g\7\9\h\2\6\j\p\c\q\m\l\t\l\2\x\r\h\0\i\u\b\e\t\7\b\n\1\y\n\k\5\o\t\n\y\x\c\f\n\5\j\u\x\2\k\j\e\j\s\y\j\l\a\5\s\e\q\c\f\a\l\a\z\t\n\8\k\o\g\3\1\9\a\v\1\3\6\g\y\t\p\g\u\6\o\1\a\2\t\g\z\2\p\m\y\i\x\l\8\p\p\u\s\4\x\c\q\w\a\v\2\6\o\n\3\h\m\n\c\2\b\3\6\z\q\d\9\c\j\i\n\o\v\0\i\z\9\g\l\x\8\2\l\6\a\0\x\q\e\b\a\n\m\m\g\j\0\i\e\j\1\l\4\i\7\a\p\c\9\a\l ]] 00:08:26.679 01:21:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.679 01:21:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:26.951 [2024-09-28 01:21:22.692545] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:26.951 [2024-09-28 01:21:22.692723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62530 ] 00:08:26.951 [2024-09-28 01:21:22.860788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.209 [2024-09-28 01:21:23.021422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.469 [2024-09-28 01:21:23.186922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.405  Copying: 512/512 [B] (average 500 kBps) 00:08:28.405 00:08:28.405 01:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f53fxu8c05gw4v9j7n437s0jm3pv5g2qlo52hdkx0byigr8yn9k9je9cwt7cftq306t652d3rrcvdzd0iq4c9ha2lx08xxbum85g83px9js6nl4xe395e8ly7rnebyk4efcnx0vtbba4hvqsb6t2x0zf6m4ithdg765t37qzrx3chiswgbx4bjqyf7n12zd0g72a6jm55d9p7onzctf7sfnv427aycc79k8tm2gw4l239gn9m6pip8gu913pjsx77ddpnl4deekti5lfvevd6qsb8241u6gtfme3kfpmps6lx4ky6e3rz63gvmit88u2nugjzc1isf9xil7om0k7g79h26jpcqmltl2xrh0iubet7bn1ynk5otnyxcfn5jux2kjejsyjla5seqcfalaztn8kog319av136gytpgu6o1a2tgz2pmyixl8ppus4xcqwav26on3hmnc2b36zqd9cjinov0iz9glx82l6a0xqebanmmgj0iej1l4i7apc9al == \f\5\3\f\x\u\8\c\0\5\g\w\4\v\9\j\7\n\4\3\7\s\0\j\m\3\p\v\5\g\2\q\l\o\5\2\h\d\k\x\0\b\y\i\g\r\8\y\n\9\k\9\j\e\9\c\w\t\7\c\f\t\q\3\0\6\t\6\5\2\d\3\r\r\c\v\d\z\d\0\i\q\4\c\9\h\a\2\l\x\0\8\x\x\b\u\m\8\5\g\8\3\p\x\9\j\s\6\n\l\4\x\e\3\9\5\e\8\l\y\7\r\n\e\b\y\k\4\e\f\c\n\x\0\v\t\b\b\a\4\h\v\q\s\b\6\t\2\x\0\z\f\6\m\4\i\t\h\d\g\7\6\5\t\3\7\q\z\r\x\3\c\h\i\s\w\g\b\x\4\b\j\q\y\f\7\n\1\2\z\d\0\g\7\2\a\6\j\m\5\5\d\9\p\7\o\n\z\c\t\f\7\s\f\n\v\4\2\7\a\y\c\c\7\9\k\8\t\m\2\g\w\4\l\2\3\9\g\n\9\m\6\p\i\p\8\g\u\9\1\3\p\j\s\x\7\7\d\d\p\n\l\4\d\e\e\k\t\i\5\l\f\v\e\v\d\6\q\s\b\8\2\4\1\u\6\g\t\f\m\e\3\k\f\p\m\p\s\6\l\x\4\k\y\6\e\3\r\z\6\3\g\v\m\i\t\8\8\u\2\n\u\g\j\z\c\1\i\s\f\9\x\i\l\7\o\m\0\k\7\g\7\9\h\2\6\j\p\c\q\m\l\t\l\2\x\r\h\0\i\u\b\e\t\7\b\n\1\y\n\k\5\o\t\n\y\x\c\f\n\5\j\u\x\2\k\j\e\j\s\y\j\l\a\5\s\e\q\c\f\a\l\a\z\t\n\8\k\o\g\3\1\9\a\v\1\3\6\g\y\t\p\g\u\6\o\1\a\2\t\g\z\2\p\m\y\i\x\l\8\p\p\u\s\4\x\c\q\w\a\v\2\6\o\n\3\h\m\n\c\2\b\3\6\z\q\d\9\c\j\i\n\o\v\0\i\z\9\g\l\x\8\2\l\6\a\0\x\q\e\b\a\n\m\m\g\j\0\i\e\j\1\l\4\i\7\a\p\c\9\a\l ]] 00:08:28.405 01:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:28.405 01:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:28.664 [2024-09-28 01:21:24.351399] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:28.664 [2024-09-28 01:21:24.351593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62545 ] 00:08:28.664 [2024-09-28 01:21:24.514072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.922 [2024-09-28 01:21:24.665991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.922 [2024-09-28 01:21:24.818414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.117  Copying: 512/512 [B] (average 166 kBps) 00:08:30.117 00:08:30.117 01:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f53fxu8c05gw4v9j7n437s0jm3pv5g2qlo52hdkx0byigr8yn9k9je9cwt7cftq306t652d3rrcvdzd0iq4c9ha2lx08xxbum85g83px9js6nl4xe395e8ly7rnebyk4efcnx0vtbba4hvqsb6t2x0zf6m4ithdg765t37qzrx3chiswgbx4bjqyf7n12zd0g72a6jm55d9p7onzctf7sfnv427aycc79k8tm2gw4l239gn9m6pip8gu913pjsx77ddpnl4deekti5lfvevd6qsb8241u6gtfme3kfpmps6lx4ky6e3rz63gvmit88u2nugjzc1isf9xil7om0k7g79h26jpcqmltl2xrh0iubet7bn1ynk5otnyxcfn5jux2kjejsyjla5seqcfalaztn8kog319av136gytpgu6o1a2tgz2pmyixl8ppus4xcqwav26on3hmnc2b36zqd9cjinov0iz9glx82l6a0xqebanmmgj0iej1l4i7apc9al == \f\5\3\f\x\u\8\c\0\5\g\w\4\v\9\j\7\n\4\3\7\s\0\j\m\3\p\v\5\g\2\q\l\o\5\2\h\d\k\x\0\b\y\i\g\r\8\y\n\9\k\9\j\e\9\c\w\t\7\c\f\t\q\3\0\6\t\6\5\2\d\3\r\r\c\v\d\z\d\0\i\q\4\c\9\h\a\2\l\x\0\8\x\x\b\u\m\8\5\g\8\3\p\x\9\j\s\6\n\l\4\x\e\3\9\5\e\8\l\y\7\r\n\e\b\y\k\4\e\f\c\n\x\0\v\t\b\b\a\4\h\v\q\s\b\6\t\2\x\0\z\f\6\m\4\i\t\h\d\g\7\6\5\t\3\7\q\z\r\x\3\c\h\i\s\w\g\b\x\4\b\j\q\y\f\7\n\1\2\z\d\0\g\7\2\a\6\j\m\5\5\d\9\p\7\o\n\z\c\t\f\7\s\f\n\v\4\2\7\a\y\c\c\7\9\k\8\t\m\2\g\w\4\l\2\3\9\g\n\9\m\6\p\i\p\8\g\u\9\1\3\p\j\s\x\7\7\d\d\p\n\l\4\d\e\e\k\t\i\5\l\f\v\e\v\d\6\q\s\b\8\2\4\1\u\6\g\t\f\m\e\3\k\f\p\m\p\s\6\l\x\4\k\y\6\e\3\r\z\6\3\g\v\m\i\t\8\8\u\2\n\u\g\j\z\c\1\i\s\f\9\x\i\l\7\o\m\0\k\7\g\7\9\h\2\6\j\p\c\q\m\l\t\l\2\x\r\h\0\i\u\b\e\t\7\b\n\1\y\n\k\5\o\t\n\y\x\c\f\n\5\j\u\x\2\k\j\e\j\s\y\j\l\a\5\s\e\q\c\f\a\l\a\z\t\n\8\k\o\g\3\1\9\a\v\1\3\6\g\y\t\p\g\u\6\o\1\a\2\t\g\z\2\p\m\y\i\x\l\8\p\p\u\s\4\x\c\q\w\a\v\2\6\o\n\3\h\m\n\c\2\b\3\6\z\q\d\9\c\j\i\n\o\v\0\i\z\9\g\l\x\8\2\l\6\a\0\x\q\e\b\a\n\m\m\g\j\0\i\e\j\1\l\4\i\7\a\p\c\9\a\l ]] 00:08:30.117 01:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.117 01:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:30.117 [2024-09-28 01:21:26.007024] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:30.117 [2024-09-28 01:21:26.007251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62570 ] 00:08:30.377 [2024-09-28 01:21:26.176855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.635 [2024-09-28 01:21:26.336928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.635 [2024-09-28 01:21:26.497850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.832  Copying: 512/512 [B] (average 500 kBps) 00:08:31.832 00:08:31.833 01:21:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f53fxu8c05gw4v9j7n437s0jm3pv5g2qlo52hdkx0byigr8yn9k9je9cwt7cftq306t652d3rrcvdzd0iq4c9ha2lx08xxbum85g83px9js6nl4xe395e8ly7rnebyk4efcnx0vtbba4hvqsb6t2x0zf6m4ithdg765t37qzrx3chiswgbx4bjqyf7n12zd0g72a6jm55d9p7onzctf7sfnv427aycc79k8tm2gw4l239gn9m6pip8gu913pjsx77ddpnl4deekti5lfvevd6qsb8241u6gtfme3kfpmps6lx4ky6e3rz63gvmit88u2nugjzc1isf9xil7om0k7g79h26jpcqmltl2xrh0iubet7bn1ynk5otnyxcfn5jux2kjejsyjla5seqcfalaztn8kog319av136gytpgu6o1a2tgz2pmyixl8ppus4xcqwav26on3hmnc2b36zqd9cjinov0iz9glx82l6a0xqebanmmgj0iej1l4i7apc9al == \f\5\3\f\x\u\8\c\0\5\g\w\4\v\9\j\7\n\4\3\7\s\0\j\m\3\p\v\5\g\2\q\l\o\5\2\h\d\k\x\0\b\y\i\g\r\8\y\n\9\k\9\j\e\9\c\w\t\7\c\f\t\q\3\0\6\t\6\5\2\d\3\r\r\c\v\d\z\d\0\i\q\4\c\9\h\a\2\l\x\0\8\x\x\b\u\m\8\5\g\8\3\p\x\9\j\s\6\n\l\4\x\e\3\9\5\e\8\l\y\7\r\n\e\b\y\k\4\e\f\c\n\x\0\v\t\b\b\a\4\h\v\q\s\b\6\t\2\x\0\z\f\6\m\4\i\t\h\d\g\7\6\5\t\3\7\q\z\r\x\3\c\h\i\s\w\g\b\x\4\b\j\q\y\f\7\n\1\2\z\d\0\g\7\2\a\6\j\m\5\5\d\9\p\7\o\n\z\c\t\f\7\s\f\n\v\4\2\7\a\y\c\c\7\9\k\8\t\m\2\g\w\4\l\2\3\9\g\n\9\m\6\p\i\p\8\g\u\9\1\3\p\j\s\x\7\7\d\d\p\n\l\4\d\e\e\k\t\i\5\l\f\v\e\v\d\6\q\s\b\8\2\4\1\u\6\g\t\f\m\e\3\k\f\p\m\p\s\6\l\x\4\k\y\6\e\3\r\z\6\3\g\v\m\i\t\8\8\u\2\n\u\g\j\z\c\1\i\s\f\9\x\i\l\7\o\m\0\k\7\g\7\9\h\2\6\j\p\c\q\m\l\t\l\2\x\r\h\0\i\u\b\e\t\7\b\n\1\y\n\k\5\o\t\n\y\x\c\f\n\5\j\u\x\2\k\j\e\j\s\y\j\l\a\5\s\e\q\c\f\a\l\a\z\t\n\8\k\o\g\3\1\9\a\v\1\3\6\g\y\t\p\g\u\6\o\1\a\2\t\g\z\2\p\m\y\i\x\l\8\p\p\u\s\4\x\c\q\w\a\v\2\6\o\n\3\h\m\n\c\2\b\3\6\z\q\d\9\c\j\i\n\o\v\0\i\z\9\g\l\x\8\2\l\6\a\0\x\q\e\b\a\n\m\m\g\j\0\i\e\j\1\l\4\i\7\a\p\c\9\a\l ]] 00:08:31.833 00:08:31.833 real 0m13.597s 00:08:31.833 user 0m11.039s 00:08:31.833 sys 0m1.542s 00:08:31.833 01:21:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.833 01:21:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:31.833 ************************************ 00:08:31.833 END TEST dd_flags_misc_forced_aio 00:08:31.833 ************************************ 00:08:31.833 01:21:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:31.833 01:21:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:31.833 01:21:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:31.833 ************************************ 00:08:31.833 END TEST spdk_dd_posix 00:08:31.833 ************************************ 00:08:31.833 00:08:31.833 real 0m56.796s 00:08:31.833 user 0m44.362s 00:08:31.833 sys 0m14.062s 00:08:31.833 01:21:27 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.833 01:21:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:31.833 01:21:27 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:31.833 01:21:27 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.833 01:21:27 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.833 01:21:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:31.833 ************************************ 00:08:31.833 START TEST spdk_dd_malloc 00:08:31.833 ************************************ 00:08:31.833 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:31.833 * Looking for test storage... 00:08:31.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:31.833 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.833 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:31.833 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:32.092 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.092 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.092 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.092 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.092 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.093 --rc genhtml_branch_coverage=1 00:08:32.093 --rc genhtml_function_coverage=1 00:08:32.093 --rc genhtml_legend=1 00:08:32.093 --rc geninfo_all_blocks=1 00:08:32.093 --rc geninfo_unexecuted_blocks=1 00:08:32.093 00:08:32.093 ' 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.093 --rc genhtml_branch_coverage=1 00:08:32.093 --rc genhtml_function_coverage=1 00:08:32.093 --rc genhtml_legend=1 00:08:32.093 --rc geninfo_all_blocks=1 00:08:32.093 --rc geninfo_unexecuted_blocks=1 00:08:32.093 00:08:32.093 ' 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:32.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.093 --rc genhtml_branch_coverage=1 00:08:32.093 --rc genhtml_function_coverage=1 00:08:32.093 --rc genhtml_legend=1 00:08:32.093 --rc geninfo_all_blocks=1 00:08:32.093 --rc geninfo_unexecuted_blocks=1 00:08:32.093 00:08:32.093 ' 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.093 --rc genhtml_branch_coverage=1 00:08:32.093 --rc genhtml_function_coverage=1 00:08:32.093 --rc genhtml_legend=1 00:08:32.093 --rc geninfo_all_blocks=1 00:08:32.093 --rc geninfo_unexecuted_blocks=1 00:08:32.093 00:08:32.093 ' 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.093 01:21:27 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 ************************************ 00:08:32.093 START TEST dd_malloc_copy 00:08:32.093 ************************************ 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:32.093 01:21:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 { 00:08:32.093 "subsystems": [ 00:08:32.093 { 00:08:32.093 "subsystem": "bdev", 00:08:32.093 "config": [ 00:08:32.093 { 00:08:32.093 "params": { 00:08:32.093 "block_size": 512, 00:08:32.093 "num_blocks": 1048576, 00:08:32.093 "name": "malloc0" 00:08:32.093 }, 00:08:32.093 "method": "bdev_malloc_create" 00:08:32.093 }, 00:08:32.093 { 00:08:32.093 "params": { 00:08:32.093 "block_size": 512, 00:08:32.093 "num_blocks": 1048576, 00:08:32.093 "name": "malloc1" 00:08:32.093 }, 00:08:32.093 "method": "bdev_malloc_create" 00:08:32.093 }, 00:08:32.093 { 00:08:32.093 "method": "bdev_wait_for_examine" 00:08:32.093 } 00:08:32.093 ] 00:08:32.093 } 00:08:32.093 ] 00:08:32.093 } 00:08:32.093 [2024-09-28 01:21:27.992330] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:32.093 [2024-09-28 01:21:27.992737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62659 ] 00:08:32.353 [2024-09-28 01:21:28.160620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.611 [2024-09-28 01:21:28.311396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.611 [2024-09-28 01:21:28.463652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.688  Copying: 189/512 [MB] (189 MBps) Copying: 379/512 [MB] (190 MBps) Copying: 512/512 [MB] (average 190 MBps) 00:08:39.688 00:08:39.688 01:21:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:39.688 01:21:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:39.688 01:21:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:39.688 01:21:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:39.688 { 00:08:39.688 "subsystems": [ 00:08:39.688 { 00:08:39.688 "subsystem": "bdev", 00:08:39.688 "config": [ 00:08:39.688 { 00:08:39.688 "params": { 00:08:39.688 "block_size": 512, 00:08:39.688 "num_blocks": 1048576, 00:08:39.688 "name": "malloc0" 00:08:39.688 }, 00:08:39.688 "method": "bdev_malloc_create" 00:08:39.688 }, 00:08:39.688 { 00:08:39.688 "params": { 00:08:39.688 "block_size": 512, 00:08:39.688 "num_blocks": 1048576, 00:08:39.688 "name": "malloc1" 00:08:39.688 }, 00:08:39.688 "method": "bdev_malloc_create" 00:08:39.688 }, 00:08:39.688 { 00:08:39.688 "method": 
"bdev_wait_for_examine" 00:08:39.688 } 00:08:39.688 ] 00:08:39.688 } 00:08:39.688 ] 00:08:39.688 } 00:08:39.688 [2024-09-28 01:21:35.149408] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:39.688 [2024-09-28 01:21:35.149599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62746 ] 00:08:39.688 [2024-09-28 01:21:35.305564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.689 [2024-09-28 01:21:35.465158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.689 [2024-09-28 01:21:35.617587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.789  Copying: 194/512 [MB] (194 MBps) Copying: 390/512 [MB] (195 MBps) Copying: 512/512 [MB] (average 192 MBps) 00:08:46.789 00:08:46.789 00:08:46.789 real 0m14.264s 00:08:46.789 user 0m13.269s 00:08:46.789 sys 0m0.798s 00:08:46.789 ************************************ 00:08:46.789 END TEST dd_malloc_copy 00:08:46.789 ************************************ 00:08:46.789 01:21:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.789 01:21:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:46.789 ************************************ 00:08:46.789 END TEST spdk_dd_malloc 00:08:46.789 ************************************ 00:08:46.789 00:08:46.789 real 0m14.518s 00:08:46.789 user 0m13.413s 00:08:46.789 sys 0m0.905s 00:08:46.789 01:21:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.789 01:21:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:46.789 01:21:42 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:46.789 01:21:42 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:46.789 01:21:42 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.789 01:21:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:46.789 ************************************ 00:08:46.789 START TEST spdk_dd_bdev_to_bdev 00:08:46.789 ************************************ 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:46.789 * Looking for test storage... 
00:08:46.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.789 --rc genhtml_branch_coverage=1 00:08:46.789 --rc genhtml_function_coverage=1 00:08:46.789 --rc genhtml_legend=1 00:08:46.789 --rc geninfo_all_blocks=1 00:08:46.789 --rc geninfo_unexecuted_blocks=1 00:08:46.789 00:08:46.789 ' 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.789 --rc genhtml_branch_coverage=1 00:08:46.789 --rc genhtml_function_coverage=1 00:08:46.789 --rc genhtml_legend=1 00:08:46.789 --rc geninfo_all_blocks=1 00:08:46.789 --rc geninfo_unexecuted_blocks=1 00:08:46.789 00:08:46.789 ' 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.789 --rc genhtml_branch_coverage=1 00:08:46.789 --rc genhtml_function_coverage=1 00:08:46.789 --rc genhtml_legend=1 00:08:46.789 --rc geninfo_all_blocks=1 00:08:46.789 --rc geninfo_unexecuted_blocks=1 00:08:46.789 00:08:46.789 ' 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.789 --rc genhtml_branch_coverage=1 00:08:46.789 --rc genhtml_function_coverage=1 00:08:46.789 --rc genhtml_legend=1 00:08:46.789 --rc geninfo_all_blocks=1 00:08:46.789 --rc geninfo_unexecuted_blocks=1 00:08:46.789 00:08:46.789 ' 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.789 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.790 01:21:42 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:46.790 ************************************ 00:08:46.790 START TEST dd_inflate_file 00:08:46.790 ************************************ 00:08:46.790 01:21:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:46.790 [2024-09-28 01:21:42.554135] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:46.790 [2024-09-28 01:21:42.554546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62898 ] 00:08:47.049 [2024-09-28 01:21:42.726367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.049 [2024-09-28 01:21:42.892769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.307 [2024-09-28 01:21:43.048369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.243  Copying: 64/64 [MB] (average 1641 MBps) 00:08:48.243 00:08:48.243 ************************************ 00:08:48.243 END TEST dd_inflate_file 00:08:48.243 ************************************ 00:08:48.243 00:08:48.243 real 0m1.699s 00:08:48.243 user 0m1.388s 00:08:48.243 sys 0m0.874s 00:08:48.243 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.243 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:48.501 ************************************ 00:08:48.501 START TEST dd_copy_to_out_bdev 00:08:48.501 ************************************ 00:08:48.501 01:21:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:48.501 { 00:08:48.501 "subsystems": [ 00:08:48.501 { 00:08:48.501 "subsystem": "bdev", 00:08:48.501 "config": [ 00:08:48.501 { 00:08:48.501 "params": { 00:08:48.501 "trtype": "pcie", 00:08:48.501 "traddr": "0000:00:10.0", 00:08:48.501 "name": "Nvme0" 00:08:48.501 }, 00:08:48.501 "method": "bdev_nvme_attach_controller" 00:08:48.501 }, 00:08:48.501 { 00:08:48.501 "params": { 00:08:48.501 "trtype": "pcie", 00:08:48.501 "traddr": "0000:00:11.0", 00:08:48.501 "name": "Nvme1" 00:08:48.501 }, 00:08:48.501 "method": "bdev_nvme_attach_controller" 00:08:48.501 }, 00:08:48.501 { 00:08:48.501 "method": "bdev_wait_for_examine" 00:08:48.501 } 00:08:48.501 ] 00:08:48.501 } 00:08:48.501 ] 00:08:48.501 } 00:08:48.501 [2024-09-28 01:21:44.310481] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:48.501 [2024-09-28 01:21:44.310658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62951 ] 00:08:48.760 [2024-09-28 01:21:44.482034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.760 [2024-09-28 01:21:44.631808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.019 [2024-09-28 01:21:44.789302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.613  Copying: 44/64 [MB] (44 MBps) Copying: 64/64 [MB] (average 44 MBps) 00:08:51.613 00:08:51.613 ************************************ 00:08:51.613 END TEST dd_copy_to_out_bdev 00:08:51.613 ************************************ 00:08:51.613 00:08:51.613 real 0m3.281s 00:08:51.613 user 0m2.984s 00:08:51.613 sys 0m2.342s 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:51.613 ************************************ 00:08:51.613 START TEST dd_offset_magic 00:08:51.613 ************************************ 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:51.613 01:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:51.872 { 00:08:51.872 "subsystems": [ 00:08:51.872 { 00:08:51.872 "subsystem": "bdev", 00:08:51.872 "config": [ 00:08:51.872 { 00:08:51.872 "params": { 00:08:51.872 "trtype": "pcie", 00:08:51.872 "traddr": "0000:00:10.0", 00:08:51.872 "name": "Nvme0" 00:08:51.872 }, 00:08:51.872 "method": "bdev_nvme_attach_controller" 00:08:51.872 }, 00:08:51.872 { 00:08:51.872 "params": { 00:08:51.872 "trtype": "pcie", 00:08:51.872 "traddr": "0000:00:11.0", 00:08:51.872 "name": "Nvme1" 00:08:51.872 }, 00:08:51.872 "method": 
"bdev_nvme_attach_controller" 00:08:51.872 }, 00:08:51.872 { 00:08:51.872 "method": "bdev_wait_for_examine" 00:08:51.872 } 00:08:51.872 ] 00:08:51.872 } 00:08:51.872 ] 00:08:51.872 } 00:08:51.872 [2024-09-28 01:21:47.649612] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:51.872 [2024-09-28 01:21:47.649791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63006 ] 00:08:52.131 [2024-09-28 01:21:47.818177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.132 [2024-09-28 01:21:47.983609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.391 [2024-09-28 01:21:48.136691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.588  Copying: 65/65 [MB] (average 928 MBps) 00:08:53.588 00:08:53.588 01:21:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:53.588 01:21:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:53.588 01:21:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:53.588 01:21:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:53.588 { 00:08:53.588 "subsystems": [ 00:08:53.588 { 00:08:53.588 "subsystem": "bdev", 00:08:53.588 "config": [ 00:08:53.588 { 00:08:53.588 "params": { 00:08:53.588 "trtype": "pcie", 00:08:53.588 "traddr": "0000:00:10.0", 00:08:53.588 "name": "Nvme0" 00:08:53.588 }, 00:08:53.588 "method": "bdev_nvme_attach_controller" 00:08:53.588 }, 00:08:53.588 { 00:08:53.588 "params": { 00:08:53.588 "trtype": "pcie", 00:08:53.588 "traddr": "0000:00:11.0", 00:08:53.588 "name": "Nvme1" 00:08:53.588 }, 00:08:53.588 "method": "bdev_nvme_attach_controller" 00:08:53.588 }, 00:08:53.588 { 00:08:53.588 "method": "bdev_wait_for_examine" 00:08:53.588 } 00:08:53.588 ] 00:08:53.588 } 00:08:53.588 ] 00:08:53.588 } 00:08:53.588 [2024-09-28 01:21:49.497148] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:53.588 [2024-09-28 01:21:49.497325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63038 ] 00:08:53.847 [2024-09-28 01:21:49.666522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.106 [2024-09-28 01:21:49.818452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.106 [2024-09-28 01:21:49.976976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.303  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:55.303 00:08:55.303 01:21:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:55.303 01:21:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:55.303 01:21:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:55.303 01:21:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:55.303 01:21:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:55.303 01:21:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:55.303 01:21:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:55.561 { 00:08:55.561 "subsystems": [ 00:08:55.561 { 00:08:55.561 "subsystem": "bdev", 00:08:55.561 "config": [ 00:08:55.561 { 00:08:55.561 "params": { 00:08:55.561 "trtype": "pcie", 00:08:55.561 "traddr": "0000:00:10.0", 00:08:55.561 "name": "Nvme0" 00:08:55.561 }, 00:08:55.561 "method": "bdev_nvme_attach_controller" 00:08:55.561 }, 00:08:55.561 { 00:08:55.561 "params": { 00:08:55.561 "trtype": "pcie", 00:08:55.561 "traddr": "0000:00:11.0", 00:08:55.561 "name": "Nvme1" 00:08:55.561 }, 00:08:55.561 "method": "bdev_nvme_attach_controller" 00:08:55.561 }, 00:08:55.561 { 00:08:55.561 "method": "bdev_wait_for_examine" 00:08:55.561 } 00:08:55.561 ] 00:08:55.561 } 00:08:55.561 ] 00:08:55.561 } 00:08:55.561 [2024-09-28 01:21:51.338156] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:55.561 [2024-09-28 01:21:51.338320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63067 ] 00:08:55.821 [2024-09-28 01:21:51.504420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.821 [2024-09-28 01:21:51.652803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.080 [2024-09-28 01:21:51.799958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.275  Copying: 65/65 [MB] (average 1015 MBps) 00:08:57.275 00:08:57.275 01:21:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:57.275 01:21:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:57.275 01:21:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:57.275 01:21:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:57.275 { 00:08:57.275 "subsystems": [ 00:08:57.275 { 00:08:57.275 "subsystem": "bdev", 00:08:57.275 "config": [ 00:08:57.275 { 00:08:57.275 "params": { 00:08:57.275 "trtype": "pcie", 00:08:57.275 "traddr": "0000:00:10.0", 00:08:57.275 "name": "Nvme0" 00:08:57.275 }, 00:08:57.275 "method": "bdev_nvme_attach_controller" 00:08:57.275 }, 00:08:57.275 { 00:08:57.275 "params": { 00:08:57.275 "trtype": "pcie", 00:08:57.275 "traddr": "0000:00:11.0", 00:08:57.275 "name": "Nvme1" 00:08:57.275 }, 00:08:57.275 "method": "bdev_nvme_attach_controller" 00:08:57.275 }, 00:08:57.275 { 00:08:57.275 "method": "bdev_wait_for_examine" 00:08:57.275 } 00:08:57.275 ] 00:08:57.275 } 00:08:57.275 ] 00:08:57.275 } 00:08:57.275 [2024-09-28 01:21:53.038642] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:57.275 [2024-09-28 01:21:53.038820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63093 ] 00:08:57.535 [2024-09-28 01:21:53.210650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.535 [2024-09-28 01:21:53.367888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.794 [2024-09-28 01:21:53.535108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.987  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:58.987 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:58.987 00:08:58.987 real 0m7.292s 00:08:58.987 user 0m6.186s 00:08:58.987 sys 0m2.214s 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:58.987 ************************************ 00:08:58.987 END TEST dd_offset_magic 00:08:58.987 ************************************ 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:58.987 01:21:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:59.245 { 00:08:59.245 "subsystems": [ 00:08:59.245 { 00:08:59.245 "subsystem": "bdev", 00:08:59.245 "config": [ 00:08:59.245 { 00:08:59.245 "params": { 00:08:59.245 "trtype": "pcie", 00:08:59.245 "traddr": "0000:00:10.0", 00:08:59.245 "name": "Nvme0" 00:08:59.245 }, 00:08:59.245 "method": "bdev_nvme_attach_controller" 00:08:59.245 }, 00:08:59.245 { 00:08:59.245 "params": { 00:08:59.245 "trtype": "pcie", 00:08:59.245 "traddr": "0000:00:11.0", 00:08:59.245 "name": "Nvme1" 00:08:59.245 }, 00:08:59.245 "method": "bdev_nvme_attach_controller" 00:08:59.245 }, 00:08:59.245 { 00:08:59.245 "method": "bdev_wait_for_examine" 00:08:59.245 } 00:08:59.245 ] 00:08:59.245 } 00:08:59.245 ] 00:08:59.245 } 00:08:59.245 [2024-09-28 01:21:54.981635] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
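Each dd_offset_magic iteration above follows the same three steps: copy 65 MiB from Nvme0n1 into Nvme1n1 starting --seek blocks (of 1 MiB) into the destination, read one 1 MiB block back from that offset with --skip into dd.dump1, and compare its first 26 bytes against the marker "This Is Our Magic, find it" planted in the source data earlier in the run. A rough sketch of one iteration, reusing the $conf from the previous sketch (paths abbreviated):

    offset=16
    # Write 65 MiB across, starting 16 MiB into the destination bdev.
    ./build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=$offset --bs=1048576 --json <(echo "$conf")
    # Read the single block at that offset back out to a file.
    ./build/bin/spdk_dd --ib=Nvme1n1 --of=test/dd/dd.dump1 --count=1 --skip=$offset --bs=1048576 --json <(echo "$conf")
    # The first 26 bytes must be the magic marker.
    read -rn26 magic_check < test/dd/dd.dump1
    [[ $magic_check == "This Is Our Magic, find it" ]] && echo "offset $offset ok"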
00:08:59.245 [2024-09-28 01:21:54.981813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63143 ] 00:08:59.245 [2024-09-28 01:21:55.150743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.503 [2024-09-28 01:21:55.317169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.761 [2024-09-28 01:21:55.480629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.955  Copying: 5120/5120 [kB] (average 1666 MBps) 00:09:00.955 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:00.955 01:21:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:00.955 { 00:09:00.955 "subsystems": [ 00:09:00.955 { 00:09:00.955 "subsystem": "bdev", 00:09:00.955 "config": [ 00:09:00.955 { 00:09:00.955 "params": { 00:09:00.955 "trtype": "pcie", 00:09:00.955 "traddr": "0000:00:10.0", 00:09:00.955 "name": "Nvme0" 00:09:00.955 }, 00:09:00.955 "method": "bdev_nvme_attach_controller" 00:09:00.955 }, 00:09:00.955 { 00:09:00.955 "params": { 00:09:00.955 "trtype": "pcie", 00:09:00.955 "traddr": "0000:00:11.0", 00:09:00.955 "name": "Nvme1" 00:09:00.955 }, 00:09:00.955 "method": "bdev_nvme_attach_controller" 00:09:00.955 }, 00:09:00.955 { 00:09:00.955 "method": "bdev_wait_for_examine" 00:09:00.955 } 00:09:00.955 ] 00:09:00.955 } 00:09:00.955 ] 00:09:00.955 } 00:09:00.955 [2024-09-28 01:21:56.690827] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:00.955 [2024-09-28 01:21:56.691036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63171 ] 00:09:00.955 [2024-09-28 01:21:56.859017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.214 [2024-09-28 01:21:57.029178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.473 [2024-09-28 01:21:57.185259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.669  Copying: 5120/5120 [kB] (average 833 MBps) 00:09:02.669 00:09:02.669 01:21:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:02.669 00:09:02.669 real 0m16.263s 00:09:02.669 user 0m13.785s 00:09:02.669 sys 0m7.211s 00:09:02.669 ************************************ 00:09:02.669 END TEST spdk_dd_bdev_to_bdev 00:09:02.669 ************************************ 00:09:02.669 01:21:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.669 01:21:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:02.669 01:21:58 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:02.669 01:21:58 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:02.669 01:21:58 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.669 01:21:58 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.669 01:21:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:02.669 ************************************ 00:09:02.669 START TEST spdk_dd_uring 00:09:02.669 ************************************ 00:09:02.669 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:02.928 * Looking for test storage... 
00:09:02.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:02.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.928 --rc genhtml_branch_coverage=1 00:09:02.928 --rc genhtml_function_coverage=1 00:09:02.928 --rc genhtml_legend=1 00:09:02.928 --rc geninfo_all_blocks=1 00:09:02.928 --rc geninfo_unexecuted_blocks=1 00:09:02.928 00:09:02.928 ' 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:02.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.928 --rc genhtml_branch_coverage=1 00:09:02.928 --rc genhtml_function_coverage=1 00:09:02.928 --rc genhtml_legend=1 00:09:02.928 --rc geninfo_all_blocks=1 00:09:02.928 --rc geninfo_unexecuted_blocks=1 00:09:02.928 00:09:02.928 ' 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:02.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.928 --rc genhtml_branch_coverage=1 00:09:02.928 --rc genhtml_function_coverage=1 00:09:02.928 --rc genhtml_legend=1 00:09:02.928 --rc geninfo_all_blocks=1 00:09:02.928 --rc geninfo_unexecuted_blocks=1 00:09:02.928 00:09:02.928 ' 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:02.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.928 --rc genhtml_branch_coverage=1 00:09:02.928 --rc genhtml_function_coverage=1 00:09:02.928 --rc genhtml_legend=1 00:09:02.928 --rc geninfo_all_blocks=1 00:09:02.928 --rc geninfo_unexecuted_blocks=1 00:09:02.928 00:09:02.928 ' 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.928 01:21:58 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:02.929 ************************************ 00:09:02.929 START TEST dd_uring_copy 00:09:02.929 ************************************ 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:02.929 
01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=6pizrac27yd6u9xl6hoj49t787u6gmh8e08o99hlwjbwoq09dig0fw2ggm39pmo8be5vzm4ilc8ee721r5hilf28nh6e81yf1roonkx5bpsszpxz0gwqypvr7kz4jio32h46ovwzwjbgm65qskqodrsoh71fn6kzvpwe53yfsgi5pbtd4ea9tzjq3ggh39bgunmom8lc6j7fyo3af69ed06cs6du16fx2arjmioynj8pwuvfba18rnzhfede2kuspbnok6zdgu7hsrxts83d9ki3wbym3c4uv18i8d33yldt6o6du342wkr4brfpeoe3siprlpnt9w6msydn6zons7h93hkg9bum129rf34us5ilbygk949f34ahshe8hy6tm50906litmnvbfb9jkbui9vqb47i0wthuegf0fld6e0a5oiawiasu4mcmifbl7r1m5w7o37e4u2nrmiifgwfz5j00bbur88go3iknc765a731adjsk2dghxyyzcjnlds9de6etbb0fno8rpxyzr4egbhol35r7z1szzvhd9g3erw3bna97vn54tchgt5dpoa7md6taegakv3l3b6k84vycyvlxpsqr7itzs6162faxipo7relvo1wmogwaz3fph5w2gknfev0tiy93toube6of3r2arsuudboukyqrtmfy389tvwyx8o168j0e6lw1dexmrwiwswniawhvtd1t7m0apbd8xurr68bc5s9u7x2q2kenagdwp7qsldmr7rl22mufvmtvfyo2ydi8ndq7qe8pv4j2t5w0k8415yvk3y5k97mprqw2wu4w32ontic5h73edze2gv5lbybnccnyu7qniznv7vtcbd8v8vf0fq4z2uasczd4277h22yxxqr23qhxtec60p7ua2vy1wmqm7fhowsl6o6un9wfxnf20evqpkrib6jnv8c0idnkgovpb46kzxsedvb28h8rqbnyn36dnd9f5mo2c8q3rdu2l97bkjysqzxnykcoqdpok7g7jp 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
6pizrac27yd6u9xl6hoj49t787u6gmh8e08o99hlwjbwoq09dig0fw2ggm39pmo8be5vzm4ilc8ee721r5hilf28nh6e81yf1roonkx5bpsszpxz0gwqypvr7kz4jio32h46ovwzwjbgm65qskqodrsoh71fn6kzvpwe53yfsgi5pbtd4ea9tzjq3ggh39bgunmom8lc6j7fyo3af69ed06cs6du16fx2arjmioynj8pwuvfba18rnzhfede2kuspbnok6zdgu7hsrxts83d9ki3wbym3c4uv18i8d33yldt6o6du342wkr4brfpeoe3siprlpnt9w6msydn6zons7h93hkg9bum129rf34us5ilbygk949f34ahshe8hy6tm50906litmnvbfb9jkbui9vqb47i0wthuegf0fld6e0a5oiawiasu4mcmifbl7r1m5w7o37e4u2nrmiifgwfz5j00bbur88go3iknc765a731adjsk2dghxyyzcjnlds9de6etbb0fno8rpxyzr4egbhol35r7z1szzvhd9g3erw3bna97vn54tchgt5dpoa7md6taegakv3l3b6k84vycyvlxpsqr7itzs6162faxipo7relvo1wmogwaz3fph5w2gknfev0tiy93toube6of3r2arsuudboukyqrtmfy389tvwyx8o168j0e6lw1dexmrwiwswniawhvtd1t7m0apbd8xurr68bc5s9u7x2q2kenagdwp7qsldmr7rl22mufvmtvfyo2ydi8ndq7qe8pv4j2t5w0k8415yvk3y5k97mprqw2wu4w32ontic5h73edze2gv5lbybnccnyu7qniznv7vtcbd8v8vf0fq4z2uasczd4277h22yxxqr23qhxtec60p7ua2vy1wmqm7fhowsl6o6un9wfxnf20evqpkrib6jnv8c0idnkgovpb46kzxsedvb28h8rqbnyn36dnd9f5mo2c8q3rdu2l97bkjysqzxnykcoqdpok7g7jp 00:09:02.929 01:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:03.187 [2024-09-28 01:21:58.867081] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:03.187 [2024-09-28 01:21:58.867244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63261 ] 00:09:03.187 [2024-09-28 01:21:59.027395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.446 [2024-09-28 01:21:59.198529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.446 [2024-09-28 01:21:59.365360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.721  Copying: 511/511 [MB] (average 1296 MBps) 00:09:06.721 00:09:06.721 01:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:06.721 01:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:06.721 01:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:06.721 01:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:06.721 { 00:09:06.721 "subsystems": [ 00:09:06.721 { 00:09:06.721 "subsystem": "bdev", 00:09:06.721 "config": [ 00:09:06.721 { 00:09:06.721 "params": { 00:09:06.721 "block_size": 512, 00:09:06.721 "num_blocks": 1048576, 00:09:06.721 "name": "malloc0" 00:09:06.721 }, 00:09:06.721 "method": "bdev_malloc_create" 00:09:06.721 }, 00:09:06.721 { 00:09:06.721 "params": { 00:09:06.721 "filename": "/dev/zram1", 00:09:06.721 "name": "uring0" 00:09:06.721 }, 00:09:06.721 "method": "bdev_uring_create" 00:09:06.721 }, 00:09:06.721 { 00:09:06.721 "method": "bdev_wait_for_examine" 00:09:06.721 } 00:09:06.721 ] 00:09:06.721 } 00:09:06.721 ] 00:09:06.721 } 00:09:06.721 [2024-09-28 01:22:02.626128] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
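The dd_uring_copy preparation above builds its backing store from scratch: hot-add a zram device, size it to 512M, expose it to spdk_dd as a uring bdev (uring0 on /dev/zram1) next to a 512 MiB malloc0 bdev, and assemble magic.dump0 from the 1024-byte magic string, a trailing newline, and 536869887 appended zero bytes, which lands exactly on 512 MiB. A hedged sketch of that setup; the disksize path is the standard zram sysfs knob (the xtrace output does not show the redirect target explicitly), and $magic stands for the string produced by gen_bytes 1024:

    # Hot-add a zram device; the kernel prints the new index (1 in this run).
    id=$(cat /sys/class/zram-control/hot_add)
    echo 512M > "/sys/block/zram$id/disksize"     # assumed target of the test's "echo 512M"
    # magic.dump0 = 1024-byte magic prefix + newline, padded to 512 MiB with zeros.
    printf '%s\n' "$magic" > test/dd/magic.dump0
    ./build/bin/spdk_dd --if=/dev/zero --of=test/dd/magic.dump0 \
        --oflag=append --bs=536869887 --count=1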
00:09:06.721 [2024-09-28 01:22:02.626311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63311 ] 00:09:06.980 [2024-09-28 01:22:02.798382] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.238 [2024-09-28 01:22:02.971066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.238 [2024-09-28 01:22:03.142851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.865  Copying: 199/512 [MB] (199 MBps) Copying: 404/512 [MB] (204 MBps) Copying: 512/512 [MB] (average 203 MBps) 00:09:12.865 00:09:12.865 01:22:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:12.865 01:22:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:12.865 01:22:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:12.865 01:22:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:12.865 { 00:09:12.865 "subsystems": [ 00:09:12.865 { 00:09:12.865 "subsystem": "bdev", 00:09:12.865 "config": [ 00:09:12.865 { 00:09:12.865 "params": { 00:09:12.865 "block_size": 512, 00:09:12.865 "num_blocks": 1048576, 00:09:12.865 "name": "malloc0" 00:09:12.865 }, 00:09:12.865 "method": "bdev_malloc_create" 00:09:12.865 }, 00:09:12.865 { 00:09:12.865 "params": { 00:09:12.865 "filename": "/dev/zram1", 00:09:12.865 "name": "uring0" 00:09:12.865 }, 00:09:12.865 "method": "bdev_uring_create" 00:09:12.865 }, 00:09:12.865 { 00:09:12.865 "method": "bdev_wait_for_examine" 00:09:12.865 } 00:09:12.865 ] 00:09:12.865 } 00:09:12.865 ] 00:09:12.865 } 00:09:12.865 [2024-09-28 01:22:08.349885] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:12.865 [2024-09-28 01:22:08.350065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63389 ] 00:09:12.865 [2024-09-28 01:22:08.520342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.865 [2024-09-28 01:22:08.680652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.124 [2024-09-28 01:22:08.840212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.539  Copying: 146/512 [MB] (146 MBps) Copying: 277/512 [MB] (131 MBps) Copying: 420/512 [MB] (142 MBps) Copying: 512/512 [MB] (average 141 MBps) 00:09:19.539 00:09:19.539 01:22:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:19.539 01:22:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 6pizrac27yd6u9xl6hoj49t787u6gmh8e08o99hlwjbwoq09dig0fw2ggm39pmo8be5vzm4ilc8ee721r5hilf28nh6e81yf1roonkx5bpsszpxz0gwqypvr7kz4jio32h46ovwzwjbgm65qskqodrsoh71fn6kzvpwe53yfsgi5pbtd4ea9tzjq3ggh39bgunmom8lc6j7fyo3af69ed06cs6du16fx2arjmioynj8pwuvfba18rnzhfede2kuspbnok6zdgu7hsrxts83d9ki3wbym3c4uv18i8d33yldt6o6du342wkr4brfpeoe3siprlpnt9w6msydn6zons7h93hkg9bum129rf34us5ilbygk949f34ahshe8hy6tm50906litmnvbfb9jkbui9vqb47i0wthuegf0fld6e0a5oiawiasu4mcmifbl7r1m5w7o37e4u2nrmiifgwfz5j00bbur88go3iknc765a731adjsk2dghxyyzcjnlds9de6etbb0fno8rpxyzr4egbhol35r7z1szzvhd9g3erw3bna97vn54tchgt5dpoa7md6taegakv3l3b6k84vycyvlxpsqr7itzs6162faxipo7relvo1wmogwaz3fph5w2gknfev0tiy93toube6of3r2arsuudboukyqrtmfy389tvwyx8o168j0e6lw1dexmrwiwswniawhvtd1t7m0apbd8xurr68bc5s9u7x2q2kenagdwp7qsldmr7rl22mufvmtvfyo2ydi8ndq7qe8pv4j2t5w0k8415yvk3y5k97mprqw2wu4w32ontic5h73edze2gv5lbybnccnyu7qniznv7vtcbd8v8vf0fq4z2uasczd4277h22yxxqr23qhxtec60p7ua2vy1wmqm7fhowsl6o6un9wfxnf20evqpkrib6jnv8c0idnkgovpb46kzxsedvb28h8rqbnyn36dnd9f5mo2c8q3rdu2l97bkjysqzxnykcoqdpok7g7jp == 
\6\p\i\z\r\a\c\2\7\y\d\6\u\9\x\l\6\h\o\j\4\9\t\7\8\7\u\6\g\m\h\8\e\0\8\o\9\9\h\l\w\j\b\w\o\q\0\9\d\i\g\0\f\w\2\g\g\m\3\9\p\m\o\8\b\e\5\v\z\m\4\i\l\c\8\e\e\7\2\1\r\5\h\i\l\f\2\8\n\h\6\e\8\1\y\f\1\r\o\o\n\k\x\5\b\p\s\s\z\p\x\z\0\g\w\q\y\p\v\r\7\k\z\4\j\i\o\3\2\h\4\6\o\v\w\z\w\j\b\g\m\6\5\q\s\k\q\o\d\r\s\o\h\7\1\f\n\6\k\z\v\p\w\e\5\3\y\f\s\g\i\5\p\b\t\d\4\e\a\9\t\z\j\q\3\g\g\h\3\9\b\g\u\n\m\o\m\8\l\c\6\j\7\f\y\o\3\a\f\6\9\e\d\0\6\c\s\6\d\u\1\6\f\x\2\a\r\j\m\i\o\y\n\j\8\p\w\u\v\f\b\a\1\8\r\n\z\h\f\e\d\e\2\k\u\s\p\b\n\o\k\6\z\d\g\u\7\h\s\r\x\t\s\8\3\d\9\k\i\3\w\b\y\m\3\c\4\u\v\1\8\i\8\d\3\3\y\l\d\t\6\o\6\d\u\3\4\2\w\k\r\4\b\r\f\p\e\o\e\3\s\i\p\r\l\p\n\t\9\w\6\m\s\y\d\n\6\z\o\n\s\7\h\9\3\h\k\g\9\b\u\m\1\2\9\r\f\3\4\u\s\5\i\l\b\y\g\k\9\4\9\f\3\4\a\h\s\h\e\8\h\y\6\t\m\5\0\9\0\6\l\i\t\m\n\v\b\f\b\9\j\k\b\u\i\9\v\q\b\4\7\i\0\w\t\h\u\e\g\f\0\f\l\d\6\e\0\a\5\o\i\a\w\i\a\s\u\4\m\c\m\i\f\b\l\7\r\1\m\5\w\7\o\3\7\e\4\u\2\n\r\m\i\i\f\g\w\f\z\5\j\0\0\b\b\u\r\8\8\g\o\3\i\k\n\c\7\6\5\a\7\3\1\a\d\j\s\k\2\d\g\h\x\y\y\z\c\j\n\l\d\s\9\d\e\6\e\t\b\b\0\f\n\o\8\r\p\x\y\z\r\4\e\g\b\h\o\l\3\5\r\7\z\1\s\z\z\v\h\d\9\g\3\e\r\w\3\b\n\a\9\7\v\n\5\4\t\c\h\g\t\5\d\p\o\a\7\m\d\6\t\a\e\g\a\k\v\3\l\3\b\6\k\8\4\v\y\c\y\v\l\x\p\s\q\r\7\i\t\z\s\6\1\6\2\f\a\x\i\p\o\7\r\e\l\v\o\1\w\m\o\g\w\a\z\3\f\p\h\5\w\2\g\k\n\f\e\v\0\t\i\y\9\3\t\o\u\b\e\6\o\f\3\r\2\a\r\s\u\u\d\b\o\u\k\y\q\r\t\m\f\y\3\8\9\t\v\w\y\x\8\o\1\6\8\j\0\e\6\l\w\1\d\e\x\m\r\w\i\w\s\w\n\i\a\w\h\v\t\d\1\t\7\m\0\a\p\b\d\8\x\u\r\r\6\8\b\c\5\s\9\u\7\x\2\q\2\k\e\n\a\g\d\w\p\7\q\s\l\d\m\r\7\r\l\2\2\m\u\f\v\m\t\v\f\y\o\2\y\d\i\8\n\d\q\7\q\e\8\p\v\4\j\2\t\5\w\0\k\8\4\1\5\y\v\k\3\y\5\k\9\7\m\p\r\q\w\2\w\u\4\w\3\2\o\n\t\i\c\5\h\7\3\e\d\z\e\2\g\v\5\l\b\y\b\n\c\c\n\y\u\7\q\n\i\z\n\v\7\v\t\c\b\d\8\v\8\v\f\0\f\q\4\z\2\u\a\s\c\z\d\4\2\7\7\h\2\2\y\x\x\q\r\2\3\q\h\x\t\e\c\6\0\p\7\u\a\2\v\y\1\w\m\q\m\7\f\h\o\w\s\l\6\o\6\u\n\9\w\f\x\n\f\2\0\e\v\q\p\k\r\i\b\6\j\n\v\8\c\0\i\d\n\k\g\o\v\p\b\4\6\k\z\x\s\e\d\v\b\2\8\h\8\r\q\b\n\y\n\3\6\d\n\d\9\f\5\m\o\2\c\8\q\3\r\d\u\2\l\9\7\b\k\j\y\s\q\z\x\n\y\k\c\o\q\d\p\o\k\7\g\7\j\p ]] 00:09:19.539 01:22:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:19.539 01:22:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 6pizrac27yd6u9xl6hoj49t787u6gmh8e08o99hlwjbwoq09dig0fw2ggm39pmo8be5vzm4ilc8ee721r5hilf28nh6e81yf1roonkx5bpsszpxz0gwqypvr7kz4jio32h46ovwzwjbgm65qskqodrsoh71fn6kzvpwe53yfsgi5pbtd4ea9tzjq3ggh39bgunmom8lc6j7fyo3af69ed06cs6du16fx2arjmioynj8pwuvfba18rnzhfede2kuspbnok6zdgu7hsrxts83d9ki3wbym3c4uv18i8d33yldt6o6du342wkr4brfpeoe3siprlpnt9w6msydn6zons7h93hkg9bum129rf34us5ilbygk949f34ahshe8hy6tm50906litmnvbfb9jkbui9vqb47i0wthuegf0fld6e0a5oiawiasu4mcmifbl7r1m5w7o37e4u2nrmiifgwfz5j00bbur88go3iknc765a731adjsk2dghxyyzcjnlds9de6etbb0fno8rpxyzr4egbhol35r7z1szzvhd9g3erw3bna97vn54tchgt5dpoa7md6taegakv3l3b6k84vycyvlxpsqr7itzs6162faxipo7relvo1wmogwaz3fph5w2gknfev0tiy93toube6of3r2arsuudboukyqrtmfy389tvwyx8o168j0e6lw1dexmrwiwswniawhvtd1t7m0apbd8xurr68bc5s9u7x2q2kenagdwp7qsldmr7rl22mufvmtvfyo2ydi8ndq7qe8pv4j2t5w0k8415yvk3y5k97mprqw2wu4w32ontic5h73edze2gv5lbybnccnyu7qniznv7vtcbd8v8vf0fq4z2uasczd4277h22yxxqr23qhxtec60p7ua2vy1wmqm7fhowsl6o6un9wfxnf20evqpkrib6jnv8c0idnkgovpb46kzxsedvb28h8rqbnyn36dnd9f5mo2c8q3rdu2l97bkjysqzxnykcoqdpok7g7jp == 
\6\p\i\z\r\a\c\2\7\y\d\6\u\9\x\l\6\h\o\j\4\9\t\7\8\7\u\6\g\m\h\8\e\0\8\o\9\9\h\l\w\j\b\w\o\q\0\9\d\i\g\0\f\w\2\g\g\m\3\9\p\m\o\8\b\e\5\v\z\m\4\i\l\c\8\e\e\7\2\1\r\5\h\i\l\f\2\8\n\h\6\e\8\1\y\f\1\r\o\o\n\k\x\5\b\p\s\s\z\p\x\z\0\g\w\q\y\p\v\r\7\k\z\4\j\i\o\3\2\h\4\6\o\v\w\z\w\j\b\g\m\6\5\q\s\k\q\o\d\r\s\o\h\7\1\f\n\6\k\z\v\p\w\e\5\3\y\f\s\g\i\5\p\b\t\d\4\e\a\9\t\z\j\q\3\g\g\h\3\9\b\g\u\n\m\o\m\8\l\c\6\j\7\f\y\o\3\a\f\6\9\e\d\0\6\c\s\6\d\u\1\6\f\x\2\a\r\j\m\i\o\y\n\j\8\p\w\u\v\f\b\a\1\8\r\n\z\h\f\e\d\e\2\k\u\s\p\b\n\o\k\6\z\d\g\u\7\h\s\r\x\t\s\8\3\d\9\k\i\3\w\b\y\m\3\c\4\u\v\1\8\i\8\d\3\3\y\l\d\t\6\o\6\d\u\3\4\2\w\k\r\4\b\r\f\p\e\o\e\3\s\i\p\r\l\p\n\t\9\w\6\m\s\y\d\n\6\z\o\n\s\7\h\9\3\h\k\g\9\b\u\m\1\2\9\r\f\3\4\u\s\5\i\l\b\y\g\k\9\4\9\f\3\4\a\h\s\h\e\8\h\y\6\t\m\5\0\9\0\6\l\i\t\m\n\v\b\f\b\9\j\k\b\u\i\9\v\q\b\4\7\i\0\w\t\h\u\e\g\f\0\f\l\d\6\e\0\a\5\o\i\a\w\i\a\s\u\4\m\c\m\i\f\b\l\7\r\1\m\5\w\7\o\3\7\e\4\u\2\n\r\m\i\i\f\g\w\f\z\5\j\0\0\b\b\u\r\8\8\g\o\3\i\k\n\c\7\6\5\a\7\3\1\a\d\j\s\k\2\d\g\h\x\y\y\z\c\j\n\l\d\s\9\d\e\6\e\t\b\b\0\f\n\o\8\r\p\x\y\z\r\4\e\g\b\h\o\l\3\5\r\7\z\1\s\z\z\v\h\d\9\g\3\e\r\w\3\b\n\a\9\7\v\n\5\4\t\c\h\g\t\5\d\p\o\a\7\m\d\6\t\a\e\g\a\k\v\3\l\3\b\6\k\8\4\v\y\c\y\v\l\x\p\s\q\r\7\i\t\z\s\6\1\6\2\f\a\x\i\p\o\7\r\e\l\v\o\1\w\m\o\g\w\a\z\3\f\p\h\5\w\2\g\k\n\f\e\v\0\t\i\y\9\3\t\o\u\b\e\6\o\f\3\r\2\a\r\s\u\u\d\b\o\u\k\y\q\r\t\m\f\y\3\8\9\t\v\w\y\x\8\o\1\6\8\j\0\e\6\l\w\1\d\e\x\m\r\w\i\w\s\w\n\i\a\w\h\v\t\d\1\t\7\m\0\a\p\b\d\8\x\u\r\r\6\8\b\c\5\s\9\u\7\x\2\q\2\k\e\n\a\g\d\w\p\7\q\s\l\d\m\r\7\r\l\2\2\m\u\f\v\m\t\v\f\y\o\2\y\d\i\8\n\d\q\7\q\e\8\p\v\4\j\2\t\5\w\0\k\8\4\1\5\y\v\k\3\y\5\k\9\7\m\p\r\q\w\2\w\u\4\w\3\2\o\n\t\i\c\5\h\7\3\e\d\z\e\2\g\v\5\l\b\y\b\n\c\c\n\y\u\7\q\n\i\z\n\v\7\v\t\c\b\d\8\v\8\v\f\0\f\q\4\z\2\u\a\s\c\z\d\4\2\7\7\h\2\2\y\x\x\q\r\2\3\q\h\x\t\e\c\6\0\p\7\u\a\2\v\y\1\w\m\q\m\7\f\h\o\w\s\l\6\o\6\u\n\9\w\f\x\n\f\2\0\e\v\q\p\k\r\i\b\6\j\n\v\8\c\0\i\d\n\k\g\o\v\p\b\4\6\k\z\x\s\e\d\v\b\2\8\h\8\r\q\b\n\y\n\3\6\d\n\d\9\f\5\m\o\2\c\8\q\3\r\d\u\2\l\9\7\b\k\j\y\s\q\z\x\n\y\k\c\o\q\d\p\o\k\7\g\7\j\p ]] 00:09:19.539 01:22:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:19.539 01:22:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:19.539 01:22:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:19.539 01:22:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:19.539 01:22:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:19.539 { 00:09:19.539 "subsystems": [ 00:09:19.539 { 00:09:19.539 "subsystem": "bdev", 00:09:19.539 "config": [ 00:09:19.539 { 00:09:19.539 "params": { 00:09:19.539 "block_size": 512, 00:09:19.539 "num_blocks": 1048576, 00:09:19.539 "name": "malloc0" 00:09:19.539 }, 00:09:19.539 "method": "bdev_malloc_create" 00:09:19.539 }, 00:09:19.539 { 00:09:19.539 "params": { 00:09:19.539 "filename": "/dev/zram1", 00:09:19.539 "name": "uring0" 00:09:19.539 }, 00:09:19.539 "method": "bdev_uring_create" 00:09:19.539 }, 00:09:19.539 { 00:09:19.539 "method": "bdev_wait_for_examine" 00:09:19.539 } 00:09:19.539 ] 00:09:19.539 } 00:09:19.539 ] 00:09:19.539 } 00:09:19.539 [2024-09-28 01:22:15.399609] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
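The verification steps interleaved above close the loop on the first copy pair: magic.dump0 was copied into uring0 and read back into magic.dump1, so the test re-reads the 1024-byte magic from both files and then diffs them whole. A small sketch of the same round-trip check (the test feeds read -rn1024 from the dump files; paths abbreviated):

    # First 1024 characters of each dump must match, then the whole files must.
    read -rn1024 a < test/dd/magic.dump0
    read -rn1024 b < test/dd/magic.dump1
    [[ $a == "$b" ]] && diff -q test/dd/magic.dump0 test/dd/magic.dump1 \
        && echo "uring round-trip verified"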
00:09:19.539 [2024-09-28 01:22:15.399804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63497 ] 00:09:19.798 [2024-09-28 01:22:15.572088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.057 [2024-09-28 01:22:15.749772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.057 [2024-09-28 01:22:15.930866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.078  Copying: 141/512 [MB] (141 MBps) Copying: 280/512 [MB] (138 MBps) Copying: 425/512 [MB] (144 MBps) Copying: 512/512 [MB] (average 141 MBps) 00:09:26.078 00:09:26.338 01:22:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:26.338 01:22:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:26.338 01:22:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:26.338 01:22:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:26.338 01:22:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:26.338 01:22:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:26.338 01:22:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:26.338 01:22:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:26.338 { 00:09:26.338 "subsystems": [ 00:09:26.338 { 00:09:26.338 "subsystem": "bdev", 00:09:26.338 "config": [ 00:09:26.338 { 00:09:26.338 "params": { 00:09:26.338 "block_size": 512, 00:09:26.338 "num_blocks": 1048576, 00:09:26.338 "name": "malloc0" 00:09:26.338 }, 00:09:26.338 "method": "bdev_malloc_create" 00:09:26.338 }, 00:09:26.338 { 00:09:26.338 "params": { 00:09:26.338 "filename": "/dev/zram1", 00:09:26.338 "name": "uring0" 00:09:26.338 }, 00:09:26.338 "method": "bdev_uring_create" 00:09:26.338 }, 00:09:26.338 { 00:09:26.338 "params": { 00:09:26.338 "name": "uring0" 00:09:26.338 }, 00:09:26.338 "method": "bdev_uring_delete" 00:09:26.338 }, 00:09:26.338 { 00:09:26.338 "method": "bdev_wait_for_examine" 00:09:26.338 } 00:09:26.338 ] 00:09:26.338 } 00:09:26.338 ] 00:09:26.338 } 00:09:26.338 [2024-09-28 01:22:22.137775] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:26.338 [2024-09-28 01:22:22.137968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63582 ] 00:09:26.597 [2024-09-28 01:22:22.309231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.597 [2024-09-28 01:22:22.458429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.856 [2024-09-28 01:22:22.603771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.336  Copying: 0/0 [B] (average 0 Bps) 00:09:29.336 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.336 01:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:29.336 { 00:09:29.336 "subsystems": [ 00:09:29.336 { 00:09:29.336 "subsystem": "bdev", 00:09:29.336 "config": [ 00:09:29.336 { 00:09:29.336 "params": { 00:09:29.336 "block_size": 512, 00:09:29.336 "num_blocks": 1048576, 00:09:29.336 "name": "malloc0" 00:09:29.336 }, 00:09:29.336 "method": "bdev_malloc_create" 00:09:29.336 }, 00:09:29.336 { 00:09:29.336 "params": { 00:09:29.336 "filename": "/dev/zram1", 00:09:29.336 "name": "uring0" 00:09:29.336 }, 00:09:29.336 "method": "bdev_uring_create" 00:09:29.336 }, 00:09:29.336 { 00:09:29.336 "params": { 00:09:29.336 "name": "uring0" 00:09:29.336 }, 00:09:29.336 "method": "bdev_uring_delete" 00:09:29.336 }, 
00:09:29.336 { 00:09:29.336 "method": "bdev_wait_for_examine" 00:09:29.336 } 00:09:29.336 ] 00:09:29.336 } 00:09:29.336 ] 00:09:29.336 } 00:09:29.336 [2024-09-28 01:22:25.237429] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:29.336 [2024-09-28 01:22:25.237588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63634 ] 00:09:29.595 [2024-09-28 01:22:25.404723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.854 [2024-09-28 01:22:25.576535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.854 [2024-09-28 01:22:25.739442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.428 [2024-09-28 01:22:26.298315] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:30.428 [2024-09-28 01:22:26.298384] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:30.428 [2024-09-28 01:22:26.298422] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:30.428 [2024-09-28 01:22:26.298440] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.333 [2024-09-28 01:22:27.964199] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:32.591 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:09:32.591 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:32.592 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:32.850 00:09:32.851 real 0m29.847s 00:09:32.851 user 0m24.551s 00:09:32.851 sys 0m15.988s 00:09:32.851 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.851 01:22:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 ************************************ 00:09:32.851 END TEST dd_uring_copy 00:09:32.851 ************************************ 00:09:32.851 00:09:32.851 real 0m30.091s 00:09:32.851 user 0m24.692s 00:09:32.851 sys 0m16.095s 00:09:32.851 01:22:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.851 ************************************ 00:09:32.851 END TEST spdk_dd_uring 00:09:32.851 ************************************ 
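The closing part of dd_uring_copy above is a negative test: after bdev_uring_delete removes uring0, a further spdk_dd read from it has to fail ("Could not open bdev uring0: No such device"), and the harness's NOT wrapper normalizes the observed exit status (237 -> 109 -> 1) and passes because it is non-zero. A plain-shell sketch of the same expectation, assuming $conf still declares malloc0/uring0 plus the bdev_uring_delete step as in the log (/dev/null stands in for the test's /dev/fd output target):

    # uring0 has just been deleted, so this spdk_dd run must fail.
    if ./build/bin/spdk_dd --ib=uring0 --of=/dev/null --json <(echo "$conf"); then
        echo "unexpected success reading a deleted bdev" >&2
        exit 1
    fi
    echo "expected failure observed"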
00:09:32.851 01:22:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 01:22:28 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:32.851 01:22:28 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:32.851 01:22:28 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.851 01:22:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 ************************************ 00:09:32.851 START TEST spdk_dd_sparse 00:09:32.851 ************************************ 00:09:32.851 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:32.851 * Looking for test storage... 00:09:32.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:32.851 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:32.851 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:09:32.851 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:33.110 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:33.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.111 --rc genhtml_branch_coverage=1 00:09:33.111 --rc genhtml_function_coverage=1 00:09:33.111 --rc genhtml_legend=1 00:09:33.111 --rc geninfo_all_blocks=1 00:09:33.111 --rc geninfo_unexecuted_blocks=1 00:09:33.111 00:09:33.111 ' 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:33.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.111 --rc genhtml_branch_coverage=1 00:09:33.111 --rc genhtml_function_coverage=1 00:09:33.111 --rc genhtml_legend=1 00:09:33.111 --rc geninfo_all_blocks=1 00:09:33.111 --rc geninfo_unexecuted_blocks=1 00:09:33.111 00:09:33.111 ' 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:33.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.111 --rc genhtml_branch_coverage=1 00:09:33.111 --rc genhtml_function_coverage=1 00:09:33.111 --rc genhtml_legend=1 00:09:33.111 --rc geninfo_all_blocks=1 00:09:33.111 --rc geninfo_unexecuted_blocks=1 00:09:33.111 00:09:33.111 ' 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:33.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.111 --rc genhtml_branch_coverage=1 00:09:33.111 --rc genhtml_function_coverage=1 00:09:33.111 --rc genhtml_legend=1 00:09:33.111 --rc geninfo_all_blocks=1 00:09:33.111 --rc geninfo_unexecuted_blocks=1 00:09:33.111 00:09:33.111 ' 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.111 01:22:28 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:33.111 1+0 records in 00:09:33.111 1+0 records out 00:09:33.111 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00663624 s, 632 MB/s 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:33.111 1+0 records in 00:09:33.111 1+0 records out 00:09:33.111 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00626484 s, 669 MB/s 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:33.111 1+0 records in 00:09:33.111 1+0 records out 00:09:33.111 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00624458 s, 672 MB/s 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:33.111 ************************************ 00:09:33.111 START TEST dd_sparse_file_to_file 00:09:33.111 ************************************ 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:33.111 01:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:33.111 { 00:09:33.111 "subsystems": [ 00:09:33.111 { 00:09:33.111 "subsystem": "bdev", 00:09:33.111 "config": [ 00:09:33.111 { 00:09:33.111 "params": { 00:09:33.111 "block_size": 4096, 00:09:33.111 "filename": "dd_sparse_aio_disk", 00:09:33.111 "name": "dd_aio" 00:09:33.111 }, 00:09:33.111 "method": "bdev_aio_create" 00:09:33.111 }, 00:09:33.111 { 00:09:33.111 "params": { 00:09:33.111 "lvs_name": "dd_lvstore", 00:09:33.111 "bdev_name": "dd_aio" 00:09:33.111 }, 00:09:33.111 "method": "bdev_lvol_create_lvstore" 00:09:33.111 }, 00:09:33.111 { 00:09:33.111 "method": "bdev_wait_for_examine" 00:09:33.111 } 00:09:33.111 ] 00:09:33.111 } 00:09:33.111 ] 00:09:33.111 } 00:09:33.111 [2024-09-28 01:22:29.029404] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:33.111 [2024-09-28 01:22:29.029575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63751 ] 00:09:33.370 [2024-09-28 01:22:29.183803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.629 [2024-09-28 01:22:29.349996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.629 [2024-09-28 01:22:29.506278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.821  Copying: 12/36 [MB] (average 1090 MBps) 00:09:34.821 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:35.080 00:09:35.080 real 0m1.848s 00:09:35.080 user 0m1.535s 00:09:35.080 sys 0m0.914s 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:35.080 ************************************ 00:09:35.080 END TEST dd_sparse_file_to_file 00:09:35.080 ************************************ 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:35.080 ************************************ 00:09:35.080 START TEST dd_sparse_file_to_bdev 00:09:35.080 ************************************ 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 
00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:35.080 01:22:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:35.080 { 00:09:35.080 "subsystems": [ 00:09:35.080 { 00:09:35.080 "subsystem": "bdev", 00:09:35.080 "config": [ 00:09:35.080 { 00:09:35.080 "params": { 00:09:35.080 "block_size": 4096, 00:09:35.080 "filename": "dd_sparse_aio_disk", 00:09:35.080 "name": "dd_aio" 00:09:35.080 }, 00:09:35.080 "method": "bdev_aio_create" 00:09:35.080 }, 00:09:35.080 { 00:09:35.080 "params": { 00:09:35.080 "lvs_name": "dd_lvstore", 00:09:35.080 "lvol_name": "dd_lvol", 00:09:35.080 "size_in_mib": 36, 00:09:35.080 "thin_provision": true 00:09:35.080 }, 00:09:35.080 "method": "bdev_lvol_create" 00:09:35.080 }, 00:09:35.080 { 00:09:35.080 "method": "bdev_wait_for_examine" 00:09:35.080 } 00:09:35.080 ] 00:09:35.080 } 00:09:35.080 ] 00:09:35.080 } 00:09:35.080 [2024-09-28 01:22:30.949805] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:35.080 [2024-09-28 01:22:30.949965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63811 ] 00:09:35.339 [2024-09-28 01:22:31.119120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.597 [2024-09-28 01:22:31.277607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.597 [2024-09-28 01:22:31.441003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.811  Copying: 12/36 [MB] (average 571 MBps) 00:09:36.811 00:09:36.811 ************************************ 00:09:36.811 END TEST dd_sparse_file_to_bdev 00:09:36.811 ************************************ 00:09:36.811 00:09:36.811 real 0m1.845s 00:09:36.811 user 0m1.553s 00:09:36.811 sys 0m0.915s 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:36.811 ************************************ 00:09:36.811 START TEST dd_sparse_bdev_to_file 00:09:36.811 ************************************ 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:36.811 01:22:32 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:36.811 01:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:37.069 { 00:09:37.069 "subsystems": [ 00:09:37.069 { 00:09:37.069 "subsystem": "bdev", 00:09:37.070 "config": [ 00:09:37.070 { 00:09:37.070 "params": { 00:09:37.070 "block_size": 4096, 00:09:37.070 "filename": "dd_sparse_aio_disk", 00:09:37.070 "name": "dd_aio" 00:09:37.070 }, 00:09:37.070 "method": "bdev_aio_create" 00:09:37.070 }, 00:09:37.070 { 00:09:37.070 "method": "bdev_wait_for_examine" 00:09:37.070 } 00:09:37.070 ] 00:09:37.070 } 00:09:37.070 ] 00:09:37.070 } 00:09:37.070 [2024-09-28 01:22:32.849062] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:37.070 [2024-09-28 01:22:32.849492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63861 ] 00:09:37.329 [2024-09-28 01:22:33.021545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.329 [2024-09-28 01:22:33.192084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.588 [2024-09-28 01:22:33.345291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.966  Copying: 12/36 [MB] (average 923 MBps) 00:09:38.966 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:38.966 00:09:38.966 real 0m1.892s 00:09:38.966 user 0m1.575s 00:09:38.966 sys 0m0.943s 00:09:38.966 01:22:34 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.966 ************************************ 00:09:38.966 END TEST dd_sparse_bdev_to_file 00:09:38.966 ************************************ 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:38.966 00:09:38.966 real 0m6.004s 00:09:38.966 user 0m4.837s 00:09:38.966 sys 0m3.007s 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.966 01:22:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:38.966 ************************************ 00:09:38.966 END TEST spdk_dd_sparse 00:09:38.966 ************************************ 00:09:38.966 01:22:34 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:38.966 01:22:34 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.966 01:22:34 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.966 01:22:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:38.966 ************************************ 00:09:38.966 START TEST spdk_dd_negative 00:09:38.966 ************************************ 00:09:38.966 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:38.966 * Looking for test storage... 
00:09:38.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:38.966 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:38.966 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:38.966 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.227 --rc genhtml_branch_coverage=1 00:09:39.227 --rc genhtml_function_coverage=1 00:09:39.227 --rc genhtml_legend=1 00:09:39.227 --rc geninfo_all_blocks=1 00:09:39.227 --rc geninfo_unexecuted_blocks=1 00:09:39.227 00:09:39.227 ' 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.227 --rc genhtml_branch_coverage=1 00:09:39.227 --rc genhtml_function_coverage=1 00:09:39.227 --rc genhtml_legend=1 00:09:39.227 --rc geninfo_all_blocks=1 00:09:39.227 --rc geninfo_unexecuted_blocks=1 00:09:39.227 00:09:39.227 ' 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.227 --rc genhtml_branch_coverage=1 00:09:39.227 --rc genhtml_function_coverage=1 00:09:39.227 --rc genhtml_legend=1 00:09:39.227 --rc geninfo_all_blocks=1 00:09:39.227 --rc geninfo_unexecuted_blocks=1 00:09:39.227 00:09:39.227 ' 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.227 --rc genhtml_branch_coverage=1 00:09:39.227 --rc genhtml_function_coverage=1 00:09:39.227 --rc genhtml_legend=1 00:09:39.227 --rc geninfo_all_blocks=1 00:09:39.227 --rc geninfo_unexecuted_blocks=1 00:09:39.227 00:09:39.227 ' 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:39.227 ************************************ 00:09:39.227 START TEST 
dd_invalid_arguments 00:09:39.227 ************************************ 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:39.227 01:22:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:39.227 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:39.228 00:09:39.228 CPU options: 00:09:39.228 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:39.228 (like [0,1,10]) 00:09:39.228 --lcores lcore to CPU mapping list. The list is in the format: 00:09:39.228 [<,lcores[@CPUs]>...] 00:09:39.228 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:39.228 Within the group, '-' is used for range separator, 00:09:39.228 ',' is used for single number separator. 00:09:39.228 '( )' can be omitted for single element group, 00:09:39.228 '@' can be omitted if cpus and lcores have the same value 00:09:39.228 --disable-cpumask-locks Disable CPU core lock files. 00:09:39.228 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:39.228 pollers in the app support interrupt mode) 00:09:39.228 -p, --main-core main (primary) core for DPDK 00:09:39.228 00:09:39.228 Configuration options: 00:09:39.228 -c, --config, --json JSON config file 00:09:39.228 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:39.228 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:39.228 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:39.228 --rpcs-allowed comma-separated list of permitted RPCS 00:09:39.228 --json-ignore-init-errors don't exit on invalid config entry 00:09:39.228 00:09:39.228 Memory options: 00:09:39.228 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:39.228 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:39.228 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:39.228 -R, --huge-unlink unlink huge files after initialization 00:09:39.228 -n, --mem-channels number of memory channels used for DPDK 00:09:39.228 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:39.228 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:39.228 --no-huge run without using hugepages 00:09:39.228 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:39.228 -i, --shm-id shared memory ID (optional) 00:09:39.228 -g, --single-file-segments force creating just one hugetlbfs file 00:09:39.228 00:09:39.228 PCI options: 00:09:39.228 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:39.228 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:39.228 -u, --no-pci disable PCI access 00:09:39.228 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:39.228 00:09:39.228 Log options: 00:09:39.228 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:39.228 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:39.228 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:39.228 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:39.228 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:09:39.228 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:09:39.228 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:09:39.228 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:09:39.228 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:09:39.228 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:09:39.228 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:09:39.228 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:09:39.228 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:39.228 --silence-noticelog disable notice level logging to stderr 00:09:39.228 00:09:39.228 Trace options: 00:09:39.228 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:39.228 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:39.228 [2024-09-28 01:22:35.031229] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:39.228 setting 0 to disable trace (default 32768) 00:09:39.228 Tracepoints vary in size and can use more than one trace entry. 00:09:39.228 -e, --tpoint-group [:] 00:09:39.228 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:09:39.228 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:09:39.228 blob, bdev_raid, all). 00:09:39.228 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:39.228 a tracepoint group. First tpoint inside a group can be enabled by 00:09:39.228 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:39.228 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:09:39.228 in /include/spdk_internal/trace_defs.h 00:09:39.228 00:09:39.228 Other options: 00:09:39.228 -h, --help show this usage 00:09:39.228 -v, --version print SPDK version 00:09:39.228 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:39.228 --env-context Opaque context for use of the env implementation 00:09:39.228 00:09:39.228 Application specific: 00:09:39.228 [--------- DD Options ---------] 00:09:39.228 --if Input file. Must specify either --if or --ib. 00:09:39.228 --ib Input bdev. Must specifier either --if or --ib 00:09:39.228 --of Output file. Must specify either --of or --ob. 00:09:39.228 --ob Output bdev. Must specify either --of or --ob. 00:09:39.228 --iflag Input file flags. 00:09:39.228 --oflag Output file flags. 00:09:39.228 --bs I/O unit size (default: 4096) 00:09:39.228 --qd Queue depth (default: 2) 00:09:39.228 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:39.228 --skip Skip this many I/O units at start of input. (default: 0) 00:09:39.228 --seek Skip this many I/O units at start of output. (default: 0) 00:09:39.228 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:39.228 --sparse Enable hole skipping in input target 00:09:39.228 Available iflag and oflag values: 00:09:39.228 append - append mode 00:09:39.228 direct - use direct I/O for data 00:09:39.228 directory - fail unless a directory 00:09:39.228 dsync - use synchronized I/O for data 00:09:39.228 noatime - do not update access time 00:09:39.228 noctty - do not assign controlling terminal from file 00:09:39.228 nofollow - do not follow symlinks 00:09:39.228 nonblock - use non-blocking I/O 00:09:39.228 sync - use synchronized I/O for data and metadata 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:39.228 00:09:39.228 real 0m0.146s 00:09:39.228 user 0m0.082s 00:09:39.228 sys 0m0.063s 00:09:39.228 ************************************ 00:09:39.228 END TEST dd_invalid_arguments 00:09:39.228 ************************************ 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:39.228 ************************************ 00:09:39.228 START TEST dd_double_input 00:09:39.228 ************************************ 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:39.228 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:39.488 [2024-09-28 01:22:35.238091] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:39.488 00:09:39.488 real 0m0.168s 00:09:39.488 user 0m0.087s 00:09:39.488 sys 0m0.078s 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:39.488 ************************************ 00:09:39.488 END TEST dd_double_input 00:09:39.488 ************************************ 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:39.488 ************************************ 00:09:39.488 START TEST dd_double_output 00:09:39.488 ************************************ 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:39.488 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:39.748 [2024-09-28 01:22:35.440821] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:39.748 00:09:39.748 real 0m0.142s 00:09:39.748 user 0m0.070s 00:09:39.748 sys 0m0.071s 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.748 ************************************ 00:09:39.748 END TEST dd_double_output 00:09:39.748 ************************************ 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:39.748 ************************************ 00:09:39.748 START TEST dd_no_input 00:09:39.748 ************************************ 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:39.748 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:39.748 [2024-09-28 01:22:35.641571] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.008 00:09:40.008 real 0m0.148s 00:09:40.008 user 0m0.079s 00:09:40.008 sys 0m0.067s 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 ************************************ 00:09:40.008 END TEST dd_no_input 00:09:40.008 ************************************ 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 ************************************ 00:09:40.008 START TEST dd_no_output 00:09:40.008 ************************************ 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:40.008 [2024-09-28 01:22:35.860368] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:40.008 01:22:35 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.008 ************************************ 00:09:40.008 END TEST dd_no_output 00:09:40.008 ************************************ 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.008 00:09:40.008 real 0m0.173s 00:09:40.008 user 0m0.091s 00:09:40.008 sys 0m0.081s 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.008 01:22:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:40.267 ************************************ 00:09:40.267 START TEST dd_wrong_blocksize 00:09:40.267 ************************************ 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:40.267 01:22:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:40.267 [2024-09-28 01:22:36.083563] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.267 ************************************ 00:09:40.267 END TEST dd_wrong_blocksize 00:09:40.267 ************************************ 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.267 00:09:40.267 real 0m0.172s 00:09:40.267 user 0m0.092s 00:09:40.267 sys 0m0.078s 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:40.267 ************************************ 00:09:40.267 START TEST dd_smaller_blocksize 00:09:40.267 ************************************ 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:40.267 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.526 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.526 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.526 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.526 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.526 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.526 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.526 
01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:40.526 01:22:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:40.526 [2024-09-28 01:22:36.311803] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:40.526 [2024-09-28 01:22:36.312226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64117 ] 00:09:40.785 [2024-09-28 01:22:36.485582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.785 [2024-09-28 01:22:36.696618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.044 [2024-09-28 01:22:36.852627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.303 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:41.561 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:41.820 [2024-09-28 01:22:37.549681] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:41.821 [2024-09-28 01:22:37.549787] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:42.388 [2024-09-28 01:22:38.190316] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:42.646 01:22:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:09:42.646 01:22:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:42.646 01:22:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:09:42.646 01:22:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:09:42.646 01:22:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:09:42.646 01:22:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:42.646 00:09:42.646 real 0m2.377s 00:09:42.646 user 0m1.588s 00:09:42.646 sys 0m0.674s 00:09:42.646 01:22:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.646 ************************************ 00:09:42.646 END TEST dd_smaller_blocksize 00:09:42.646 ************************************ 00:09:42.646 01:22:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:42.904 ************************************ 00:09:42.904 START TEST dd_invalid_count 00:09:42.904 ************************************ 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:42.904 [2024-09-28 01:22:38.735178] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:42.904 00:09:42.904 real 0m0.167s 00:09:42.904 user 0m0.091s 00:09:42.904 sys 0m0.074s 00:09:42.904 ************************************ 00:09:42.904 END TEST dd_invalid_count 00:09:42.904 ************************************ 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.904 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:43.164 ************************************ 
00:09:43.164 START TEST dd_invalid_oflag 00:09:43.164 ************************************ 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:43.164 [2024-09-28 01:22:38.938217] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.164 00:09:43.164 real 0m0.139s 00:09:43.164 user 0m0.075s 00:09:43.164 sys 0m0.063s 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.164 ************************************ 00:09:43.164 END TEST dd_invalid_oflag 00:09:43.164 ************************************ 00:09:43.164 01:22:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:43.164 ************************************ 00:09:43.164 START TEST dd_invalid_iflag 00:09:43.164 
************************************ 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:43.164 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:43.424 [2024-09-28 01:22:39.176678] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.424 00:09:43.424 real 0m0.198s 00:09:43.424 user 0m0.121s 00:09:43.424 sys 0m0.073s 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:43.424 ************************************ 00:09:43.424 END TEST dd_invalid_iflag 00:09:43.424 ************************************ 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.424 01:22:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:43.425 ************************************ 00:09:43.425 START TEST dd_unknown_flag 00:09:43.425 ************************************ 00:09:43.425 
01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:43.425 01:22:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:43.684 [2024-09-28 01:22:39.395420] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:43.684 [2024-09-28 01:22:39.395571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64235 ] 00:09:43.684 [2024-09-28 01:22:39.557523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.943 [2024-09-28 01:22:39.721181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.201 [2024-09-28 01:22:39.876594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.201 [2024-09-28 01:22:39.966369] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:44.201 [2024-09-28 01:22:39.966449] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.201 [2024-09-28 01:22:39.966543] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:44.201 [2024-09-28 01:22:39.966566] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.201 [2024-09-28 01:22:39.966845] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:44.201 [2024-09-28 01:22:39.966888] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.201 [2024-09-28 01:22:39.966976] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:44.201 [2024-09-28 01:22:39.966997] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:44.769 [2024-09-28 01:22:40.605902] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:45.336 01:22:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:09:45.336 01:22:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:45.336 01:22:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:09:45.336 01:22:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:09:45.336 01:22:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:09:45.336 01:22:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:45.336 00:09:45.336 real 0m1.703s 00:09:45.336 user 0m1.409s 00:09:45.336 sys 0m0.192s 00:09:45.336 01:22:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.336 01:22:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:45.337 ************************************ 00:09:45.337 END TEST dd_unknown_flag 00:09:45.337 ************************************ 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:45.337 ************************************ 00:09:45.337 START TEST dd_invalid_json 00:09:45.337 ************************************ 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:45.337 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:45.337 [2024-09-28 01:22:41.160031] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:45.337 [2024-09-28 01:22:41.160257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64270 ] 00:09:45.596 [2024-09-28 01:22:41.331744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.596 [2024-09-28 01:22:41.502759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.596 [2024-09-28 01:22:41.502932] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:45.596 [2024-09-28 01:22:41.502960] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:45.596 [2024-09-28 01:22:41.502989] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:45.596 [2024-09-28 01:22:41.503061] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:46.168 00:09:46.168 real 0m0.843s 00:09:46.168 user 0m0.594s 00:09:46.168 sys 0m0.144s 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.168 ************************************ 00:09:46.168 END TEST dd_invalid_json 00:09:46.168 ************************************ 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:46.168 ************************************ 00:09:46.168 START TEST dd_invalid_seek 00:09:46.168 ************************************ 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:46.168 
01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:46.168 01:22:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:46.168 { 00:09:46.168 "subsystems": [ 00:09:46.168 { 00:09:46.168 "subsystem": "bdev", 00:09:46.168 "config": [ 00:09:46.168 { 00:09:46.168 "params": { 00:09:46.168 "block_size": 512, 00:09:46.168 "num_blocks": 512, 00:09:46.168 "name": "malloc0" 00:09:46.168 }, 00:09:46.168 "method": "bdev_malloc_create" 00:09:46.168 }, 00:09:46.168 { 00:09:46.168 "params": { 00:09:46.168 "block_size": 512, 00:09:46.168 "num_blocks": 512, 00:09:46.168 "name": "malloc1" 00:09:46.168 }, 00:09:46.168 "method": "bdev_malloc_create" 00:09:46.168 }, 00:09:46.168 { 00:09:46.168 "method": "bdev_wait_for_examine" 00:09:46.168 } 00:09:46.168 ] 00:09:46.168 } 00:09:46.168 ] 00:09:46.168 } 00:09:46.168 [2024-09-28 01:22:42.057568] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:46.168 [2024-09-28 01:22:42.057734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64306 ] 00:09:46.428 [2024-09-28 01:22:42.240855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.687 [2024-09-28 01:22:42.411303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.687 [2024-09-28 01:22:42.580293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.945 [2024-09-28 01:22:42.690775] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:46.945 [2024-09-28 01:22:42.690858] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.512 [2024-09-28 01:22:43.334865] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:48.080 00:09:48.080 real 0m1.796s 00:09:48.080 user 0m1.530s 00:09:48.080 sys 0m0.219s 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.080 ************************************ 00:09:48.080 END TEST dd_invalid_seek 00:09:48.080 ************************************ 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 ************************************ 00:09:48.080 START TEST dd_invalid_skip 00:09:48.080 ************************************ 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:48.080 01:22:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:48.080 { 00:09:48.080 "subsystems": [ 00:09:48.080 { 00:09:48.080 "subsystem": "bdev", 00:09:48.080 "config": [ 00:09:48.080 { 00:09:48.080 "params": { 00:09:48.080 "block_size": 512, 00:09:48.080 "num_blocks": 512, 00:09:48.080 "name": "malloc0" 00:09:48.080 }, 00:09:48.080 "method": "bdev_malloc_create" 00:09:48.080 }, 00:09:48.080 { 00:09:48.080 "params": { 00:09:48.080 "block_size": 512, 00:09:48.080 "num_blocks": 512, 00:09:48.080 "name": "malloc1" 00:09:48.080 }, 00:09:48.080 "method": "bdev_malloc_create" 00:09:48.080 }, 00:09:48.080 { 00:09:48.080 "method": "bdev_wait_for_examine" 00:09:48.080 } 00:09:48.080 ] 00:09:48.080 } 00:09:48.080 ] 00:09:48.080 } 00:09:48.080 [2024-09-28 01:22:43.901692] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:48.080 [2024-09-28 01:22:43.901840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64359 ] 00:09:48.339 [2024-09-28 01:22:44.071749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.339 [2024-09-28 01:22:44.250757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.599 [2024-09-28 01:22:44.426554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.858 [2024-09-28 01:22:44.553371] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:48.858 [2024-09-28 01:22:44.553498] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:49.426 [2024-09-28 01:22:45.242124] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.995 00:09:49.995 real 0m1.875s 00:09:49.995 user 0m1.586s 00:09:49.995 sys 0m0.245s 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:49.995 ************************************ 00:09:49.995 END TEST dd_invalid_skip 00:09:49.995 ************************************ 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:49.995 ************************************ 00:09:49.995 START TEST dd_invalid_input_count 00:09:49.995 ************************************ 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:49.995 01:22:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:49.995 { 00:09:49.995 "subsystems": [ 00:09:49.995 { 00:09:49.995 "subsystem": "bdev", 00:09:49.995 "config": [ 00:09:49.995 { 00:09:49.995 "params": { 00:09:49.995 "block_size": 512, 00:09:49.995 "num_blocks": 512, 00:09:49.995 "name": "malloc0" 00:09:49.995 }, 00:09:49.995 "method": "bdev_malloc_create" 00:09:49.995 }, 00:09:49.995 { 00:09:49.995 "params": { 00:09:49.995 "block_size": 512, 00:09:49.995 "num_blocks": 512, 00:09:49.995 "name": "malloc1" 00:09:49.995 }, 00:09:49.995 "method": "bdev_malloc_create" 00:09:49.995 }, 00:09:49.995 { 00:09:49.995 "method": "bdev_wait_for_examine" 00:09:49.995 } 00:09:49.995 ] 00:09:49.995 } 00:09:49.995 ] 00:09:49.995 } 00:09:49.995 [2024-09-28 01:22:45.842258] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:49.995 [2024-09-28 01:22:45.842430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64410 ] 00:09:50.254 [2024-09-28 01:22:46.015247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.512 [2024-09-28 01:22:46.199884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.513 [2024-09-28 01:22:46.385391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:50.771 [2024-09-28 01:22:46.513991] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:50.771 [2024-09-28 01:22:46.514074] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:51.338 [2024-09-28 01:22:47.177954] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:51.906 00:09:51.906 real 0m1.831s 00:09:51.906 user 0m1.545s 00:09:51.906 sys 0m0.238s 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:51.906 ************************************ 00:09:51.906 END TEST dd_invalid_input_count 00:09:51.906 ************************************ 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:51.906 ************************************ 00:09:51.906 START TEST dd_invalid_output_count 00:09:51.906 ************************************ 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:51.906 01:22:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:51.906 { 00:09:51.906 "subsystems": [ 00:09:51.906 { 00:09:51.906 "subsystem": "bdev", 00:09:51.906 "config": [ 00:09:51.906 { 00:09:51.906 "params": { 00:09:51.906 "block_size": 512, 00:09:51.906 "num_blocks": 512, 00:09:51.906 "name": "malloc0" 00:09:51.906 }, 00:09:51.906 "method": "bdev_malloc_create" 00:09:51.906 }, 00:09:51.906 { 00:09:51.906 "method": "bdev_wait_for_examine" 00:09:51.906 } 00:09:51.906 ] 00:09:51.906 } 00:09:51.906 ] 00:09:51.906 } 00:09:51.906 [2024-09-28 01:22:47.723321] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:51.906 [2024-09-28 01:22:47.723525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64456 ] 00:09:52.165 [2024-09-28 01:22:47.894705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.165 [2024-09-28 01:22:48.067233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.424 [2024-09-28 01:22:48.221599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:52.424 [2024-09-28 01:22:48.331746] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:52.424 [2024-09-28 01:22:48.331847] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:53.361 [2024-09-28 01:22:48.962715] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:53.620 ************************************ 00:09:53.620 END TEST dd_invalid_output_count 00:09:53.620 ************************************ 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:53.620 00:09:53.620 real 0m1.752s 00:09:53.620 user 0m1.468s 00:09:53.620 sys 0m0.235s 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 ************************************ 00:09:53.620 START TEST dd_bs_not_multiple 00:09:53.620 ************************************ 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:53.620 01:22:49 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:53.620 01:22:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:53.620 { 00:09:53.620 "subsystems": [ 00:09:53.620 { 00:09:53.620 "subsystem": "bdev", 00:09:53.620 "config": [ 00:09:53.620 { 00:09:53.620 "params": { 00:09:53.620 "block_size": 512, 00:09:53.620 "num_blocks": 512, 00:09:53.620 "name": "malloc0" 00:09:53.620 }, 00:09:53.620 "method": "bdev_malloc_create" 00:09:53.620 }, 00:09:53.620 { 00:09:53.620 "params": { 00:09:53.620 "block_size": 512, 00:09:53.620 "num_blocks": 512, 00:09:53.620 "name": "malloc1" 00:09:53.620 }, 00:09:53.620 "method": "bdev_malloc_create" 00:09:53.620 }, 00:09:53.620 { 00:09:53.620 "method": "bdev_wait_for_examine" 00:09:53.620 } 00:09:53.620 ] 00:09:53.620 } 00:09:53.620 ] 00:09:53.620 } 00:09:53.620 [2024-09-28 01:22:49.530575] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:53.620 [2024-09-28 01:22:49.530737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64499 ] 00:09:53.879 [2024-09-28 01:22:49.703774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.138 [2024-09-28 01:22:49.861725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.138 [2024-09-28 01:22:50.033320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.396 [2024-09-28 01:22:50.145791] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:54.396 [2024-09-28 01:22:50.145875] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:54.963 [2024-09-28 01:22:50.795233] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:55.556 00:09:55.556 real 0m1.775s 00:09:55.556 user 0m1.520s 00:09:55.556 sys 0m0.226s 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.556 ************************************ 00:09:55.556 END TEST dd_bs_not_multiple 00:09:55.556 ************************************ 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 00:09:55.556 real 0m16.485s 00:09:55.556 user 0m12.424s 00:09:55.556 sys 0m3.429s 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.556 01:22:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 ************************************ 00:09:55.556 END TEST spdk_dd_negative 00:09:55.556 ************************************ 00:09:55.556 00:09:55.556 real 3m1.398s 00:09:55.556 user 2m27.710s 00:09:55.556 sys 1m2.065s 00:09:55.556 01:22:51 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.556 01:22:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 ************************************ 00:09:55.556 END TEST spdk_dd 00:09:55.556 ************************************ 00:09:55.556 01:22:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:55.556 01:22:51 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:55.556 01:22:51 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:55.556 01:22:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.556 01:22:51 -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 01:22:51 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:55.556 01:22:51 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:55.556 01:22:51 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:55.556 01:22:51 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:09:55.556 01:22:51 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:55.556 01:22:51 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:55.556 01:22:51 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:55.556 01:22:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:55.556 01:22:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.556 01:22:51 -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 ************************************ 00:09:55.556 START TEST nvmf_tcp 00:09:55.556 ************************************ 00:09:55.556 01:22:51 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:55.556 * Looking for test storage... 00:09:55.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:55.556 01:22:51 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:55.556 01:22:51 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:55.556 01:22:51 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.815 01:22:51 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:55.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.815 --rc genhtml_branch_coverage=1 00:09:55.815 --rc genhtml_function_coverage=1 00:09:55.815 --rc genhtml_legend=1 00:09:55.815 --rc geninfo_all_blocks=1 00:09:55.815 --rc geninfo_unexecuted_blocks=1 00:09:55.815 00:09:55.815 ' 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:55.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.815 --rc genhtml_branch_coverage=1 00:09:55.815 --rc genhtml_function_coverage=1 00:09:55.815 --rc genhtml_legend=1 00:09:55.815 --rc geninfo_all_blocks=1 00:09:55.815 --rc geninfo_unexecuted_blocks=1 00:09:55.815 00:09:55.815 ' 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:55.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.815 --rc genhtml_branch_coverage=1 00:09:55.815 --rc genhtml_function_coverage=1 00:09:55.815 --rc genhtml_legend=1 00:09:55.815 --rc geninfo_all_blocks=1 00:09:55.815 --rc geninfo_unexecuted_blocks=1 00:09:55.815 00:09:55.815 ' 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:55.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.815 --rc genhtml_branch_coverage=1 00:09:55.815 --rc genhtml_function_coverage=1 00:09:55.815 --rc genhtml_legend=1 00:09:55.815 --rc geninfo_all_blocks=1 00:09:55.815 --rc geninfo_unexecuted_blocks=1 00:09:55.815 00:09:55.815 ' 00:09:55.815 01:22:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:55.815 01:22:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:55.815 01:22:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.815 01:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:55.815 ************************************ 00:09:55.815 START TEST nvmf_target_core 00:09:55.815 ************************************ 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:55.815 * Looking for test storage... 00:09:55.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:55.815 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.816 --rc genhtml_branch_coverage=1 00:09:55.816 --rc genhtml_function_coverage=1 00:09:55.816 --rc genhtml_legend=1 00:09:55.816 --rc geninfo_all_blocks=1 00:09:55.816 --rc geninfo_unexecuted_blocks=1 00:09:55.816 00:09:55.816 ' 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.816 --rc genhtml_branch_coverage=1 00:09:55.816 --rc genhtml_function_coverage=1 00:09:55.816 --rc genhtml_legend=1 00:09:55.816 --rc geninfo_all_blocks=1 00:09:55.816 --rc geninfo_unexecuted_blocks=1 00:09:55.816 00:09:55.816 ' 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.816 --rc genhtml_branch_coverage=1 00:09:55.816 --rc genhtml_function_coverage=1 00:09:55.816 --rc genhtml_legend=1 00:09:55.816 --rc geninfo_all_blocks=1 00:09:55.816 --rc geninfo_unexecuted_blocks=1 00:09:55.816 00:09:55.816 ' 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.816 --rc genhtml_branch_coverage=1 00:09:55.816 --rc genhtml_function_coverage=1 00:09:55.816 --rc genhtml_legend=1 00:09:55.816 --rc geninfo_all_blocks=1 00:09:55.816 --rc geninfo_unexecuted_blocks=1 00:09:55.816 00:09:55.816 ' 00:09:55.816 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.077 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.077 ************************************ 00:09:56.077 START TEST nvmf_host_management 00:09:56.077 ************************************ 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:56.077 * Looking for test storage... 
00:09:56.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.077 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:56.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.078 --rc genhtml_branch_coverage=1 00:09:56.078 --rc genhtml_function_coverage=1 00:09:56.078 --rc genhtml_legend=1 00:09:56.078 --rc geninfo_all_blocks=1 00:09:56.078 --rc geninfo_unexecuted_blocks=1 00:09:56.078 00:09:56.078 ' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:56.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.078 --rc genhtml_branch_coverage=1 00:09:56.078 --rc genhtml_function_coverage=1 00:09:56.078 --rc genhtml_legend=1 00:09:56.078 --rc geninfo_all_blocks=1 00:09:56.078 --rc geninfo_unexecuted_blocks=1 00:09:56.078 00:09:56.078 ' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:56.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.078 --rc genhtml_branch_coverage=1 00:09:56.078 --rc genhtml_function_coverage=1 00:09:56.078 --rc genhtml_legend=1 00:09:56.078 --rc geninfo_all_blocks=1 00:09:56.078 --rc geninfo_unexecuted_blocks=1 00:09:56.078 00:09:56.078 ' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:56.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.078 --rc genhtml_branch_coverage=1 00:09:56.078 --rc genhtml_function_coverage=1 00:09:56.078 --rc genhtml_legend=1 00:09:56.078 --rc geninfo_all_blocks=1 00:09:56.078 --rc geninfo_unexecuted_blocks=1 00:09:56.078 00:09:56.078 ' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.078 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.078 01:22:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:56.078 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:56.079 01:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.079 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:56.338 Cannot find device "nvmf_init_br" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:56.338 Cannot find device "nvmf_init_br2" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:56.338 Cannot find device "nvmf_tgt_br" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.338 Cannot find device "nvmf_tgt_br2" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:56.338 Cannot find device "nvmf_init_br" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:56.338 Cannot find device "nvmf_init_br2" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:56.338 Cannot find device "nvmf_tgt_br" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:56.338 Cannot find device "nvmf_tgt_br2" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:56.338 Cannot find device "nvmf_br" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:56.338 Cannot find device "nvmf_init_if" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:56.338 Cannot find device "nvmf_init_if2" 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:56.338 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:56.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:56.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.143 ms 00:09:56.598 00:09:56.598 --- 10.0.0.3 ping statistics --- 00:09:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.598 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:56.598 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:56.598 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:09:56.598 00:09:56.598 --- 10.0.0.4 ping statistics --- 00:09:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.598 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:56.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:56.598 00:09:56.598 --- 10.0.0.1 ping statistics --- 00:09:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.598 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:56.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:56.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:09:56.598 00:09:56.598 --- 10.0.0.2 ping statistics --- 00:09:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.598 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=64860 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 64860 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64860 ']' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.598 01:22:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:56.857 [2024-09-28 01:22:52.640744] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
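The nvmf_veth_init trace above is what makes the rest of the run possible: it creates a network namespace for the target, veth pairs whose bridge-side legs join a common nvmf_br bridge, 10.0.0.0/24 addressing, iptables ACCEPT rules for the NVMe/TCP port, and the reachability pings whose statistics are shown. Condensed into a standalone script (same device names and addresses as common.sh uses; the second initiator/target interfaces, cleanup and error handling are omitted), the topology is roughly:

```bash
#!/usr/bin/env bash
# Minimal recreation of the nvmf_veth_init topology (sketch, primary veth pair only).
ip netns add nvmf_tgt_ns_spdk

# One veth pair per side; the bridge-side legs stay in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing used by the pings above: initiator 10.0.0.1, target 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring links up and join both bridge legs to nvmf_br.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# Let NVMe/TCP traffic through and confirm both directions work.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```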
00:09:56.857 [2024-09-28 01:22:52.640944] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.116 [2024-09-28 01:22:52.811555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.116 [2024-09-28 01:22:52.968790] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.116 [2024-09-28 01:22:52.968872] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.116 [2024-09-28 01:22:52.968907] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.116 [2024-09-28 01:22:52.968917] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.116 [2024-09-28 01:22:52.968928] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.116 [2024-09-28 01:22:52.969136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.116 [2024-09-28 01:22:52.969306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.116 [2024-09-28 01:22:52.969424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.116 [2024-09-28 01:22:52.969470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.375 [2024-09-28 01:22:53.137357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:57.633 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.633 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:57.633 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:57.633 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.633 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.892 [2024-09-28 01:22:53.596088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
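The create_subsystem step that follows assembles its RPCs into rpcs.txt and replays them through rpc_cmd, so the individual calls are not echoed; only their effects show up below as the Malloc0 bdev and the TCP listener on 10.0.0.3 port 4420. An explicit equivalent using scripts/rpc.py would look roughly like the sketch below; the actual rpcs.txt contents are not in the log, so this sequence is inferred from those notices, from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE (64/512), and from the cnode0/host0 NQNs used later:

```bash
# Inferred equivalent of the batched rpcs.txt (not a verbatim copy of the file).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

# Transport was already created above via: rpc_cmd nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
```

Registering host0 explicitly matters for this test: the nvmf_subsystem_remove_host call near the end of the section is what forces the qpair disconnects recorded there.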
00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.892 Malloc0 00:09:57.892 [2024-09-28 01:22:53.705080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64915 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64915 /var/tmp/bdevperf.sock 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64915 ']' 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:57.892 { 00:09:57.892 "params": { 00:09:57.892 "name": "Nvme$subsystem", 00:09:57.892 "trtype": "$TEST_TRANSPORT", 00:09:57.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.892 "adrfam": "ipv4", 00:09:57.892 "trsvcid": "$NVMF_PORT", 00:09:57.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.892 "hdgst": ${hdgst:-false}, 00:09:57.892 "ddgst": ${ddgst:-false} 00:09:57.892 }, 00:09:57.892 "method": "bdev_nvme_attach_controller" 00:09:57.892 } 00:09:57.892 EOF 00:09:57.892 )") 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:57.892 01:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:57.892 "params": { 00:09:57.892 "name": "Nvme0", 00:09:57.892 "trtype": "tcp", 00:09:57.892 "traddr": "10.0.0.3", 00:09:57.892 "adrfam": "ipv4", 00:09:57.892 "trsvcid": "4420", 00:09:57.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:57.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:57.892 "hdgst": false, 00:09:57.892 "ddgst": false 00:09:57.892 }, 00:09:57.892 "method": "bdev_nvme_attach_controller" 00:09:57.892 }' 00:09:58.151 [2024-09-28 01:22:53.869541] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:58.151 [2024-09-28 01:22:53.869717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64915 ] 00:09:58.151 [2024-09-28 01:22:54.060741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.411 [2024-09-28 01:22:54.307766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.669 [2024-09-28 01:22:54.494248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.929 Running I/O for 10 seconds... 
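At this point bdevperf is attached to the target and running the 10-second verify workload. The harness feeds it the generated config on /dev/fd/63, and only the inner bdev_nvme_attach_controller object is echoed above; for a standalone reproduction the surrounding SPDK JSON-config wrapper has to be added back (the standard "subsystems"/"bdev"/"config" layout is assumed here), after which the same flags can be passed to the same binary:

```bash
# Sketch: rerun the attach + verify workload outside the test harness.
# The "subsystems" wrapper below is assumed; the log only shows the inner object.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as above: queue depth 64, 64 KiB I/O size, verify workload, 10 seconds.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10
```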
00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.190 01:22:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.190 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:59.190 [2024-09-28 
01:22:55.055883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.190 [2024-09-28 01:22:55.055959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.190 [2024-09-28 01:22:55.055976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.190 [2024-09-28 01:22:55.055988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.190 [2024-09-28 01:22:55.055999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.190 [2024-09-28 01:22:55.055990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.190 [2024-09-28 01:22:55.056011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.190 [2024-09-28 01:22:55.056022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.190 [2024-09-28 01:22:55.056037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.056048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.191 [2024-09-28 01:22:55.056071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.056086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.191 [2024-09-28 01:22:55.056097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.056109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:09:59.191 [2024-09-28 01:22:55.056120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.056133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the 
state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the 
state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:09:59.191 [2024-09-28 01:22:55.056919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.056944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.056971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.056985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.057000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.057013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.057028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.057055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.057071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.057084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.057099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.057111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.057126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.057138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.057152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.057164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.057178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.057194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.191 [2024-09-28 01:22:55.057209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.191 [2024-09-28 01:22:55.057221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.057951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.057990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.192 [2024-09-28 01:22:55.058470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.192 [2024-09-28 01:22:55.058503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.193 [2024-09-28 01:22:55.058959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.193 [2024-09-28 01:22:55.058973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:09:59.193 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.193 [2024-09-28 01:22:55.059241] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 
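The waitforio helper traced above (host_management.sh lines 45-64) is what decided the I/O was flowing before the abort: it polls bdev_get_iostat over the bdevperf RPC socket and succeeds once the bdev reports at least 100 completed reads; here 387 were already visible on the first poll, so the nvmf_subsystem_remove_host call hit a busy queue and produced the long run of ABORTED - SQ DELETION completions above. A sketch of that loop, assuming a short pause between polls that the xtrace does not show:

    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i count
        for (( i = 10; i != 0; i-- )); do
            count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25   # pacing assumed; not visible in the trace
        done
        return $ret
    }
    # e.g. waitforio /var/tmp/bdevperf.sock Nvme0n1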
00:09:59.193 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:59.193 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.193 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:59.193 task offset: 57344 on job bdev=Nvme0n1 fails 00:09:59.193 00:09:59.193 Latency(us) 00:09:59.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.193 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:59.193 Job: Nvme0n1 ended in about 0.37 seconds with error 00:09:59.193 Verification LBA range: start 0x0 length 0x400 00:09:59.193 Nvme0n1 : 0.37 1200.51 75.03 171.50 0.00 44900.57 4319.42 47424.23 00:09:59.193 =================================================================================================================== 00:09:59.193 Total : 1200.51 75.03 171.50 0.00 44900.57 4319.42 47424.23 00:09:59.193 [2024-09-28 01:22:55.060625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:59.193 [2024-09-28 01:22:55.065676] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.193 [2024-09-28 01:22:55.065718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:09:59.193 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.193 01:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:59.193 [2024-09-28 01:22:55.079130] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
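For the failed 0.37 s run summarized in the table above, the IOPS and MiB/s columns are mutually consistent with the 64 KiB (65536-byte) verify I/O size; a quick check outside the log:

    # MiB/s = IOPS * io_size / 2^20, with io_size = 65536 bytes
    awk 'BEGIN { printf "%.2f MiB/s\n", 1200.51 * 65536 / 1048576 }'   # prints 75.03 MiB/s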
00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64915 00:10:00.568 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64915) - No such process 00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:00.568 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:00.568 { 00:10:00.569 "params": { 00:10:00.569 "name": "Nvme$subsystem", 00:10:00.569 "trtype": "$TEST_TRANSPORT", 00:10:00.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.569 "adrfam": "ipv4", 00:10:00.569 "trsvcid": "$NVMF_PORT", 00:10:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.569 "hdgst": ${hdgst:-false}, 00:10:00.569 "ddgst": ${ddgst:-false} 00:10:00.569 }, 00:10:00.569 "method": "bdev_nvme_attach_controller" 00:10:00.569 } 00:10:00.569 EOF 00:10:00.569 )") 00:10:00.569 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:10:00.569 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:10:00.569 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:10:00.569 01:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:00.569 "params": { 00:10:00.569 "name": "Nvme0", 00:10:00.569 "trtype": "tcp", 00:10:00.569 "traddr": "10.0.0.3", 00:10:00.569 "adrfam": "ipv4", 00:10:00.569 "trsvcid": "4420", 00:10:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:00.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:00.569 "hdgst": false, 00:10:00.569 "ddgst": false 00:10:00.569 }, 00:10:00.569 "method": "bdev_nvme_attach_controller" 00:10:00.569 }' 00:10:00.569 [2024-09-28 01:22:56.186613] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
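The kill -9 at the start of the entry above fails with "No such process" because bdevperf already exited after the spdk_app_stop seen earlier; line 91 of host_management.sh tolerates that (hence the trailing true in the trace) so the failure does not abort the test, and the per-core lock files are removed before the second, one-second verify pass whose startup follows. Roughly, and as a sketch rather than the script's exact text:

    # Tolerant teardown before the 1-second re-run (host_management.sh lines 91-97).
    kill -9 "$perfpid" || true        # bdevperf may already be gone
    rm -f /var/tmp/spdk_cpu_lock_00{1..4}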
00:10:00.569 [2024-09-28 01:22:56.186803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64954 ] 00:10:00.569 [2024-09-28 01:22:56.363306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.828 [2024-09-28 01:22:56.549344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.828 [2024-09-28 01:22:56.735536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.086 Running I/O for 1 seconds... 00:10:02.279 1344.00 IOPS, 84.00 MiB/s 00:10:02.279 Latency(us) 00:10:02.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.279 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:02.279 Verification LBA range: start 0x0 length 0x400 00:10:02.279 Nvme0n1 : 1.04 1353.55 84.60 0.00 0.00 46402.34 5868.45 41704.73 00:10:02.279 =================================================================================================================== 00:10:02.279 Total : 1353.55 84.60 0.00 0.00 46402.34 5868.45 41704.73 00:10:03.214 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:03.214 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:03.214 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:03.214 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:03.214 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:03.214 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:03.214 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.473 rmmod nvme_tcp 00:10:03.473 rmmod nvme_fabrics 00:10:03.473 rmmod nvme_keyring 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 64860 ']' 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 64860 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64860 ']' 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64860 00:10:03.473 
01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64860 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64860' 00:10:03.473 killing process with pid 64860 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64860 00:10:03.473 01:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64860 00:10:04.849 [2024-09-28 01:23:00.474417] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:04.849 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:04.849 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:04.849 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:04.850 01:23:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.850 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:05.109 00:10:05.109 real 0m9.030s 00:10:05.109 user 0m33.850s 00:10:05.109 sys 0m1.781s 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.109 ************************************ 00:10:05.109 END TEST nvmf_host_management 00:10:05.109 ************************************ 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.109 ************************************ 00:10:05.109 START TEST nvmf_lvol 00:10:05.109 ************************************ 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:05.109 * Looking for test storage... 
00:10:05.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:10:05.109 01:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:05.109 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.369 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.369 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.369 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:05.369 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.369 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:05.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.369 --rc genhtml_branch_coverage=1 00:10:05.369 --rc genhtml_function_coverage=1 00:10:05.369 --rc genhtml_legend=1 00:10:05.369 --rc geninfo_all_blocks=1 00:10:05.369 --rc geninfo_unexecuted_blocks=1 00:10:05.369 00:10:05.369 ' 00:10:05.369 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:05.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.369 --rc genhtml_branch_coverage=1 00:10:05.369 --rc genhtml_function_coverage=1 00:10:05.369 --rc genhtml_legend=1 00:10:05.369 --rc geninfo_all_blocks=1 00:10:05.369 --rc geninfo_unexecuted_blocks=1 00:10:05.369 00:10:05.369 ' 00:10:05.369 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:05.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.369 --rc genhtml_branch_coverage=1 00:10:05.369 --rc genhtml_function_coverage=1 00:10:05.369 --rc genhtml_legend=1 00:10:05.369 --rc geninfo_all_blocks=1 00:10:05.369 --rc geninfo_unexecuted_blocks=1 00:10:05.369 00:10:05.369 ' 00:10:05.369 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:05.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.369 --rc genhtml_branch_coverage=1 00:10:05.369 --rc genhtml_function_coverage=1 00:10:05.369 --rc genhtml_legend=1 00:10:05.370 --rc geninfo_all_blocks=1 00:10:05.370 --rc geninfo_unexecuted_blocks=1 00:10:05.370 00:10:05.370 ' 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.370 01:23:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.370 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:05.370 
01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
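The interface and address variables being set here describe the virtual test network that nvmf_veth_init builds next: the initiator veth ends stay in the default namespace with 10.0.0.1/10.0.0.2, the target ends are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/10.0.0.4, and the bridge halves of every pair are enslaved to nvmf_br. A condensed sketch of the equivalent iproute2/iptables calls, reduced to a single initiator/target pair (names and addresses are the ones the trace uses; the second pair and error handling are omitted):

  # namespace plus one veth pair per side; the *_br ends stay in the default namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator address outside the namespace, target address inside it
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the *_br ends so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # admit NVMe/TCP traffic on port 4420, allow forwarding across the bridge, then verify
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3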
00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:05.370 Cannot find device "nvmf_init_br" 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:05.370 Cannot find device "nvmf_init_br2" 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:05.370 Cannot find device "nvmf_tgt_br" 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:05.370 Cannot find device "nvmf_tgt_br2" 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:05.370 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:05.370 Cannot find device "nvmf_init_br" 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:05.371 Cannot find device "nvmf_init_br2" 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:05.371 Cannot find device "nvmf_tgt_br" 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:05.371 Cannot find device "nvmf_tgt_br2" 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:05.371 Cannot find device "nvmf_br" 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:05.371 Cannot find device "nvmf_init_if" 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:05.371 Cannot find device "nvmf_init_if2" 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:05.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:05.371 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:05.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:05.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:05.630 00:10:05.630 --- 10.0.0.3 ping statistics --- 00:10:05.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.630 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:05.630 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:05.630 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:10:05.630 00:10:05.630 --- 10.0.0.4 ping statistics --- 00:10:05.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.630 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:05.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:05.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:05.630 00:10:05.630 --- 10.0.0.1 ping statistics --- 00:10:05.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.630 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:05.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:05.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:10:05.630 00:10:05.630 --- 10.0.0.2 ping statistics --- 00:10:05.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.630 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=65248 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 65248 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 65248 ']' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.630 01:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:05.889 [2024-09-28 01:23:01.646388] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
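nvmfappstart launches the target binary inside the test namespace and then blocks in waitforlisten until the application answers on its RPC socket, so the rpc.py calls that follow cannot race the startup. A minimal stand-in for that logic, using the paths and flags from the trace (the polling loop is a simplification of the real waitforlisten helper, not its actual implementation):

  # start the NVMe-oF target in the namespace: shm id 0, all tracepoint groups, core mask 0x7
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!

  # wait until the app serves its RPC socket before configuring it
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target already died
      sleep 0.5
  done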
00:10:05.889 [2024-09-28 01:23:01.646563] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.148 [2024-09-28 01:23:01.825641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.148 [2024-09-28 01:23:02.024476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.148 [2024-09-28 01:23:02.024550] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.148 [2024-09-28 01:23:02.024578] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.148 [2024-09-28 01:23:02.024591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.148 [2024-09-28 01:23:02.024604] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.148 [2024-09-28 01:23:02.025268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.148 [2024-09-28 01:23:02.025428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.148 [2024-09-28 01:23:02.025491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.406 [2024-09-28 01:23:02.250269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.974 01:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.974 01:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:06.974 01:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:06.974 01:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.974 01:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 01:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.974 01:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:07.232 [2024-09-28 01:23:02.955613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.232 01:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.490 01:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:07.491 01:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.057 01:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:08.057 01:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:08.317 01:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:08.635 01:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e4c85168-752d-42ad-ba05-9d9f494c6722 00:10:08.635 01:23:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e4c85168-752d-42ad-ba05-9d9f494c6722 lvol 20 00:10:08.894 01:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cef80c97-603a-4782-a0b8-aedbe56aac5f 00:10:08.894 01:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:09.153 01:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cef80c97-603a-4782-a0b8-aedbe56aac5f 00:10:09.410 01:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:09.669 [2024-09-28 01:23:05.522800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:09.669 01:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:09.928 01:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65334 00:10:09.928 01:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:09.928 01:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:11.304 01:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot cef80c97-603a-4782-a0b8-aedbe56aac5f MY_SNAPSHOT 00:10:11.304 01:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=746a470c-bba2-4fe7-9413-ebdfe42a215f 00:10:11.304 01:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize cef80c97-603a-4782-a0b8-aedbe56aac5f 30 00:10:11.564 01:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 746a470c-bba2-4fe7-9413-ebdfe42a215f MY_CLONE 00:10:11.823 01:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7c49c224-1e60-483f-97f3-a07a44b845c0 00:10:11.823 01:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7c49c224-1e60-483f-97f3-a07a44b845c0 00:10:12.390 01:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65334 00:10:20.507 Initializing NVMe Controllers 00:10:20.507 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:20.507 Controller IO queue size 128, less than required. 00:10:20.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:20.507 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:20.507 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:20.507 Initialization complete. Launching workers. 
00:10:20.507 ======================================================== 00:10:20.507 Latency(us) 00:10:20.507 Device Information : IOPS MiB/s Average min max 00:10:20.507 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8953.70 34.98 14306.25 457.22 156004.13 00:10:20.507 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8897.60 34.76 14396.70 5225.93 124078.72 00:10:20.507 ======================================================== 00:10:20.507 Total : 17851.30 69.73 14351.33 457.22 156004.13 00:10:20.507 00:10:20.507 01:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:20.766 01:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cef80c97-603a-4782-a0b8-aedbe56aac5f 00:10:21.026 01:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4c85168-752d-42ad-ba05-9d9f494c6722 00:10:21.285 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:21.285 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:21.285 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:21.285 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:21.285 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:21.285 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.285 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:21.285 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.286 rmmod nvme_tcp 00:10:21.286 rmmod nvme_fabrics 00:10:21.286 rmmod nvme_keyring 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 65248 ']' 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 65248 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 65248 ']' 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 65248 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65248 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.286 killing process with pid 65248 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 65248' 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 65248 00:10:21.286 01:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 65248 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:23.192 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:23.193 00:10:23.193 real 0m17.974s 00:10:23.193 user 1m10.810s 00:10:23.193 sys 0m4.132s 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 ************************************ 00:10:23.193 END TEST nvmf_lvol 00:10:23.193 ************************************ 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 ************************************ 00:10:23.193 START TEST nvmf_lvs_grow 00:10:23.193 ************************************ 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:23.193 * Looking for test storage... 00:10:23.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:23.193 01:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:23.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.193 --rc genhtml_branch_coverage=1 00:10:23.193 --rc genhtml_function_coverage=1 00:10:23.193 --rc genhtml_legend=1 00:10:23.193 --rc geninfo_all_blocks=1 00:10:23.193 --rc geninfo_unexecuted_blocks=1 00:10:23.193 00:10:23.193 ' 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:23.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.193 --rc genhtml_branch_coverage=1 00:10:23.193 --rc genhtml_function_coverage=1 00:10:23.193 --rc genhtml_legend=1 00:10:23.193 --rc geninfo_all_blocks=1 00:10:23.193 --rc geninfo_unexecuted_blocks=1 00:10:23.193 00:10:23.193 ' 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:23.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.193 --rc genhtml_branch_coverage=1 00:10:23.193 --rc genhtml_function_coverage=1 00:10:23.193 --rc genhtml_legend=1 00:10:23.193 --rc geninfo_all_blocks=1 00:10:23.193 --rc geninfo_unexecuted_blocks=1 00:10:23.193 00:10:23.193 ' 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:23.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.193 --rc genhtml_branch_coverage=1 00:10:23.193 --rc genhtml_function_coverage=1 00:10:23.193 --rc genhtml_legend=1 00:10:23.193 --rc geninfo_all_blocks=1 00:10:23.193 --rc geninfo_unexecuted_blocks=1 00:10:23.193 00:10:23.193 ' 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:23.193 01:23:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:23.193 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.194 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
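Both this test and the nvmf_lvol run above drive the target entirely through rpc.py (nvmf_lvs_grow additionally keeps a second socket, /var/tmp/bdevperf.sock, for its bdevperf instance). For reference, the storage stack the lvol test just exercised condenses to the rpc.py sequence below; the commands and sizes (64 MiB malloc bdevs, a 20 MiB lvol grown to 30 MiB) are taken from the trace above, while the shell variables and output capture are simplified:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # transport plus backing storage: two 64 MiB malloc bdevs striped into raid0
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                  # Malloc0
  $rpc bdev_malloc_create 64 512                  # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

  # lvol store on the raid, one 20 MiB lvol exported over NVMe/TCP on 10.0.0.3:4420
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # while spdk_nvme_perf writes to the volume, snapshot / resize / clone / inflate it
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"

  # teardown in reverse order
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"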
00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
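The burst of "Cannot find device ..." messages that follows is expected: nvmf_veth_init starts by tearing down whatever a previous run may have left behind, and every cleanup step deliberately tolerates failure, roughly along these lines (a sketch of the pattern visible in the trace, not the exact common.sh code):

  # best-effort teardown of a leftover topology; each step may print
  # "Cannot find device" and the failure is then ignored via || true
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" nomaster || true
      ip link set "$br" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  # the namespace itself is then (re)created with: ip netns add nvmf_tgt_ns_spdk

Only after this sweep does the script rebuild the namespace, veth pairs and bridge, exactly as it did for the previous test.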
00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:23.194 Cannot find device "nvmf_init_br" 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:23.194 Cannot find device "nvmf_init_br2" 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:23.194 Cannot find device "nvmf_tgt_br" 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:23.194 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.453 Cannot find device "nvmf_tgt_br2" 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:23.453 Cannot find device "nvmf_init_br" 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:23.453 Cannot find device "nvmf_init_br2" 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:23.453 Cannot find device "nvmf_tgt_br" 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:23.453 Cannot find device "nvmf_tgt_br2" 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:23.453 Cannot find device "nvmf_br" 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:23.453 Cannot find device "nvmf_init_if" 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:23.453 Cannot find device "nvmf_init_if2" 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.453 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
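Condensing the ip(8) calls above, the veth/bridge topology is built roughly as follows (only one veth pair per side is shown; the *_if2 pairs are handled identically). This is a sketch of what common.sh does, not a verbatim excerpt:

ip netns add nvmf_tgt_ns_spdk

# one initiator-side pair and one target-side pair
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peers so initiator and target namespaces can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
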
00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:23.712 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.712 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:10:23.712 00:10:23.712 --- 10.0.0.3 ping statistics --- 00:10:23.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.712 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:23.712 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:23.712 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:10:23.712 00:10:23.712 --- 10.0.0.4 ping statistics --- 00:10:23.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.712 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:23.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:23.712 00:10:23.712 --- 10.0.0.1 ping statistics --- 00:10:23.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.712 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:23.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:23.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:10:23.712 00:10:23.712 --- 10.0.0.2 ping statistics --- 00:10:23.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.712 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:23.712 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=65730 00:10:23.713 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:23.713 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 65730 00:10:23.713 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 65730 ']' 00:10:23.713 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.713 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.713 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.713 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.713 01:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:23.713 [2024-09-28 01:23:19.603650] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:23.713 [2024-09-28 01:23:19.603810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.971 [2024-09-28 01:23:19.776513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.230 [2024-09-28 01:23:19.963133] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.230 [2024-09-28 01:23:19.963200] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.230 [2024-09-28 01:23:19.963223] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.230 [2024-09-28 01:23:19.963242] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.230 [2024-09-28 01:23:19.963255] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.230 [2024-09-28 01:23:19.963296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.230 [2024-09-28 01:23:20.143891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.795 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.795 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:24.795 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:24.795 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:24.795 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:24.795 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.795 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:25.054 [2024-09-28 01:23:20.823911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:25.055 ************************************ 00:10:25.055 START TEST lvs_grow_clean 00:10:25.055 ************************************ 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:25.055 01:23:20 
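With the data path in place, the trace above opens TCP port 4420 on the initiator-side interfaces, verifies connectivity with single-packet pings, starts nvmf_tgt inside the namespace, and creates the TCP transport over RPC. A condensed sketch of those steps (the SPDK_NVMF comment tags the rules the test adds, presumably so teardown can strip exactly these later; backgrounding and pid handling are simplified here):

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

ping -c 1 10.0.0.3                                      # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # namespace -> host

# target runs inside the namespace; -m 0x1 pins it to core 0, -e 0xFFFF enables tracepoints
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
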
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:25.055 01:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:25.314 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:25.314 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:25.574 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:25.574 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:25.574 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:25.834 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:25.834 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:25.834 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5a8d4119-96c2-48fe-9ca3-be875473def3 lvol 150 00:10:26.093 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ce708960-98a4-46d6-aa0f-b1505668f0c7 00:10:26.093 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:26.093 01:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:26.352 [2024-09-28 01:23:22.090695] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:26.352 [2024-09-28 01:23:22.090798] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:26.352 true 00:10:26.352 01:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:26.352 01:23:22 
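The lvs_grow_clean body above stages a 200 MiB file-backed AIO bdev, puts a logical volume store with 4 MiB clusters on it, carves a 150 MiB lvol out of it, then doubles the backing file and rescans so the bdev layer sees the extra capacity. Roughly, with the UUIDs as placeholders and the rpc.py path shortened into a variable:

AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

truncate -s 200M "$AIO_FILE"
"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096
lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # -> 49 usable 4 MiB clusters
lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB logical volume

truncate -s 400M "$AIO_FILE"                                # grow the backing file...
"$RPC" bdev_aio_rescan aio_bdev                             # ...and let the aio bdev notice
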
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:26.612 01:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:26.612 01:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:26.898 01:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ce708960-98a4-46d6-aa0f-b1505668f0c7 00:10:27.189 01:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:27.448 [2024-09-28 01:23:23.184015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:27.448 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:27.707 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65813 00:10:27.707 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:27.707 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:27.707 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65813 /var/tmp/bdevperf.sock 00:10:27.707 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 65813 ']' 00:10:27.707 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:27.707 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:27.707 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:27.708 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.708 01:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:27.708 [2024-09-28 01:23:23.530989] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
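Note that total_data_clusters is still 49 at this point: bdev_aio_rescan only grows the base bdev, while the lvstore itself is extended later by bdev_lvol_grow_lvstore. The lvol is then exported over NVMe/TCP and a separate bdevperf process is started as the initiator. A sketch of those calls, reusing the placeholder variables from the previous sketch:

"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# initiator side: bdevperf on core 1 (-m 0x2), 4 KiB random writes, queue depth 128, 10 s run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
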
00:10:27.708 [2024-09-28 01:23:23.531142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65813 ] 00:10:27.966 [2024-09-28 01:23:23.694435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.225 [2024-09-28 01:23:23.905661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.225 [2024-09-28 01:23:24.066243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.792 01:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.792 01:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:28.792 01:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:29.050 Nvme0n1 00:10:29.050 01:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:29.309 [ 00:10:29.309 { 00:10:29.309 "name": "Nvme0n1", 00:10:29.309 "aliases": [ 00:10:29.309 "ce708960-98a4-46d6-aa0f-b1505668f0c7" 00:10:29.309 ], 00:10:29.309 "product_name": "NVMe disk", 00:10:29.309 "block_size": 4096, 00:10:29.309 "num_blocks": 38912, 00:10:29.309 "uuid": "ce708960-98a4-46d6-aa0f-b1505668f0c7", 00:10:29.309 "numa_id": -1, 00:10:29.309 "assigned_rate_limits": { 00:10:29.309 "rw_ios_per_sec": 0, 00:10:29.309 "rw_mbytes_per_sec": 0, 00:10:29.309 "r_mbytes_per_sec": 0, 00:10:29.309 "w_mbytes_per_sec": 0 00:10:29.309 }, 00:10:29.309 "claimed": false, 00:10:29.309 "zoned": false, 00:10:29.309 "supported_io_types": { 00:10:29.309 "read": true, 00:10:29.309 "write": true, 00:10:29.309 "unmap": true, 00:10:29.309 "flush": true, 00:10:29.309 "reset": true, 00:10:29.309 "nvme_admin": true, 00:10:29.309 "nvme_io": true, 00:10:29.309 "nvme_io_md": false, 00:10:29.309 "write_zeroes": true, 00:10:29.309 "zcopy": false, 00:10:29.309 "get_zone_info": false, 00:10:29.309 "zone_management": false, 00:10:29.309 "zone_append": false, 00:10:29.309 "compare": true, 00:10:29.309 "compare_and_write": true, 00:10:29.309 "abort": true, 00:10:29.309 "seek_hole": false, 00:10:29.309 "seek_data": false, 00:10:29.309 "copy": true, 00:10:29.309 "nvme_iov_md": false 00:10:29.309 }, 00:10:29.309 "memory_domains": [ 00:10:29.309 { 00:10:29.309 "dma_device_id": "system", 00:10:29.309 "dma_device_type": 1 00:10:29.309 } 00:10:29.309 ], 00:10:29.309 "driver_specific": { 00:10:29.309 "nvme": [ 00:10:29.309 { 00:10:29.309 "trid": { 00:10:29.309 "trtype": "TCP", 00:10:29.309 "adrfam": "IPv4", 00:10:29.309 "traddr": "10.0.0.3", 00:10:29.309 "trsvcid": "4420", 00:10:29.309 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:29.309 }, 00:10:29.309 "ctrlr_data": { 00:10:29.309 "cntlid": 1, 00:10:29.309 "vendor_id": "0x8086", 00:10:29.309 "model_number": "SPDK bdev Controller", 00:10:29.309 "serial_number": "SPDK0", 00:10:29.309 "firmware_revision": "25.01", 00:10:29.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:29.309 "oacs": { 00:10:29.309 "security": 0, 00:10:29.309 "format": 0, 00:10:29.309 "firmware": 0, 
00:10:29.310 "ns_manage": 0 00:10:29.310 }, 00:10:29.310 "multi_ctrlr": true, 00:10:29.310 "ana_reporting": false 00:10:29.310 }, 00:10:29.310 "vs": { 00:10:29.310 "nvme_version": "1.3" 00:10:29.310 }, 00:10:29.310 "ns_data": { 00:10:29.310 "id": 1, 00:10:29.310 "can_share": true 00:10:29.310 } 00:10:29.310 } 00:10:29.310 ], 00:10:29.310 "mp_policy": "active_passive" 00:10:29.310 } 00:10:29.310 } 00:10:29.310 ] 00:10:29.310 01:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65836 00:10:29.310 01:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:29.310 01:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:29.569 Running I/O for 10 seconds... 00:10:30.503 Latency(us) 00:10:30.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.503 Nvme0n1 : 1.00 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:10:30.503 =================================================================================================================== 00:10:30.503 Total : 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:10:30.503 00:10:31.439 01:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:31.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.439 Nvme0n1 : 2.00 5778.50 22.57 0.00 0.00 0.00 0.00 0.00 00:10:31.439 =================================================================================================================== 00:10:31.439 Total : 5778.50 22.57 0.00 0.00 0.00 0.00 0.00 00:10:31.439 00:10:31.698 true 00:10:31.698 01:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:31.698 01:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:31.956 01:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:31.956 01:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:31.956 01:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65836 00:10:32.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.523 Nvme0n1 : 3.00 5730.33 22.38 0.00 0.00 0.00 0.00 0.00 00:10:32.523 =================================================================================================================== 00:10:32.523 Total : 5730.33 22.38 0.00 0.00 0.00 0.00 0.00 00:10:32.523 00:10:33.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.457 Nvme0n1 : 4.00 5694.75 22.25 0.00 0.00 0.00 0.00 0.00 00:10:33.457 =================================================================================================================== 00:10:33.457 Total : 5694.75 22.25 0.00 0.00 0.00 0.00 0.00 00:10:33.457 00:10:34.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:34.389 Nvme0n1 : 5.00 5673.40 22.16 0.00 0.00 0.00 0.00 0.00 00:10:34.389 =================================================================================================================== 00:10:34.389 Total : 5673.40 22.16 0.00 0.00 0.00 0.00 0.00 00:10:34.389 00:10:35.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.767 Nvme0n1 : 6.00 5638.00 22.02 0.00 0.00 0.00 0.00 0.00 00:10:35.767 =================================================================================================================== 00:10:35.767 Total : 5638.00 22.02 0.00 0.00 0.00 0.00 0.00 00:10:35.767 00:10:36.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.334 Nvme0n1 : 7.00 5612.71 21.92 0.00 0.00 0.00 0.00 0.00 00:10:36.334 =================================================================================================================== 00:10:36.334 Total : 5612.71 21.92 0.00 0.00 0.00 0.00 0.00 00:10:36.334 00:10:37.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.710 Nvme0n1 : 8.00 5609.62 21.91 0.00 0.00 0.00 0.00 0.00 00:10:37.710 =================================================================================================================== 00:10:37.710 Total : 5609.62 21.91 0.00 0.00 0.00 0.00 0.00 00:10:37.710 00:10:38.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.646 Nvme0n1 : 9.00 5607.22 21.90 0.00 0.00 0.00 0.00 0.00 00:10:38.646 =================================================================================================================== 00:10:38.646 Total : 5607.22 21.90 0.00 0.00 0.00 0.00 0.00 00:10:38.646 00:10:39.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.610 Nvme0n1 : 10.00 5592.60 21.85 0.00 0.00 0.00 0.00 0.00 00:10:39.610 =================================================================================================================== 00:10:39.610 Total : 5592.60 21.85 0.00 0.00 0.00 0.00 0.00 00:10:39.610 00:10:39.610 00:10:39.610 Latency(us) 00:10:39.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.610 Nvme0n1 : 10.00 5603.48 21.89 0.00 0.00 22836.63 14417.92 68634.07 00:10:39.610 =================================================================================================================== 00:10:39.610 Total : 5603.48 21.89 0.00 0.00 22836.63 14417.92 68634.07 00:10:39.610 { 00:10:39.610 "results": [ 00:10:39.610 { 00:10:39.610 "job": "Nvme0n1", 00:10:39.610 "core_mask": "0x2", 00:10:39.610 "workload": "randwrite", 00:10:39.610 "status": "finished", 00:10:39.610 "queue_depth": 128, 00:10:39.610 "io_size": 4096, 00:10:39.610 "runtime": 10.003419, 00:10:39.610 "iops": 5603.4841687627, 00:10:39.610 "mibps": 21.8886100342293, 00:10:39.610 "io_failed": 0, 00:10:39.610 "io_timeout": 0, 00:10:39.610 "avg_latency_us": 22836.634145385782, 00:10:39.610 "min_latency_us": 14417.92, 00:10:39.610 "max_latency_us": 68634.06545454546 00:10:39.610 } 00:10:39.610 ], 00:10:39.610 "core_count": 1 00:10:39.610 } 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65813 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 65813 ']' 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@954 -- # kill -0 65813 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65813 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:39.610 killing process with pid 65813 00:10:39.610 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65813' 00:10:39.611 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 65813 00:10:39.611 Received shutdown signal, test time was about 10.000000 seconds 00:10:39.611 00:10:39.611 Latency(us) 00:10:39.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.611 =================================================================================================================== 00:10:39.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:39.611 01:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 65813 00:10:40.546 01:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:40.806 01:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:41.064 01:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:41.064 01:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:41.323 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:41.323 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:41.323 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:41.582 [2024-09-28 01:23:37.488254] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:41.842 01:23:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:41.842 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:42.102 request: 00:10:42.102 { 00:10:42.102 "uuid": "5a8d4119-96c2-48fe-9ca3-be875473def3", 00:10:42.102 "method": "bdev_lvol_get_lvstores", 00:10:42.102 "req_id": 1 00:10:42.102 } 00:10:42.102 Got JSON-RPC error response 00:10:42.102 response: 00:10:42.102 { 00:10:42.102 "code": -19, 00:10:42.102 "message": "No such device" 00:10:42.102 } 00:10:42.102 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:42.102 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.102 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.102 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.102 01:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:42.361 aio_bdev 00:10:42.361 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ce708960-98a4-46d6-aa0f-b1505668f0c7 00:10:42.361 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ce708960-98a4-46d6-aa0f-b1505668f0c7 00:10:42.361 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.361 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:42.361 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.361 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.361 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:42.620 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ce708960-98a4-46d6-aa0f-b1505668f0c7 -t 2000 00:10:42.620 [ 00:10:42.620 { 00:10:42.620 "name": "ce708960-98a4-46d6-aa0f-b1505668f0c7", 00:10:42.620 "aliases": [ 00:10:42.620 "lvs/lvol" 00:10:42.620 ], 00:10:42.620 "product_name": "Logical Volume", 00:10:42.620 "block_size": 4096, 00:10:42.620 "num_blocks": 38912, 00:10:42.620 "uuid": "ce708960-98a4-46d6-aa0f-b1505668f0c7", 00:10:42.620 "assigned_rate_limits": { 00:10:42.620 "rw_ios_per_sec": 0, 00:10:42.620 "rw_mbytes_per_sec": 0, 00:10:42.620 "r_mbytes_per_sec": 0, 00:10:42.620 "w_mbytes_per_sec": 0 00:10:42.620 }, 00:10:42.620 "claimed": false, 00:10:42.620 "zoned": false, 00:10:42.620 "supported_io_types": { 00:10:42.620 "read": true, 00:10:42.620 "write": true, 00:10:42.620 "unmap": true, 00:10:42.620 "flush": false, 00:10:42.620 "reset": true, 00:10:42.620 "nvme_admin": false, 00:10:42.620 "nvme_io": false, 00:10:42.620 "nvme_io_md": false, 00:10:42.620 "write_zeroes": true, 00:10:42.620 "zcopy": false, 00:10:42.620 "get_zone_info": false, 00:10:42.620 "zone_management": false, 00:10:42.620 "zone_append": false, 00:10:42.620 "compare": false, 00:10:42.620 "compare_and_write": false, 00:10:42.620 "abort": false, 00:10:42.620 "seek_hole": true, 00:10:42.620 "seek_data": true, 00:10:42.620 "copy": false, 00:10:42.620 "nvme_iov_md": false 00:10:42.620 }, 00:10:42.620 "driver_specific": { 00:10:42.620 "lvol": { 00:10:42.620 "lvol_store_uuid": "5a8d4119-96c2-48fe-9ca3-be875473def3", 00:10:42.620 "base_bdev": "aio_bdev", 00:10:42.620 "thin_provision": false, 00:10:42.620 "num_allocated_clusters": 38, 00:10:42.620 "snapshot": false, 00:10:42.620 "clone": false, 00:10:42.620 "esnap_clone": false 00:10:42.620 } 00:10:42.620 } 00:10:42.620 } 00:10:42.620 ] 00:10:42.879 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:42.879 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:42.879 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:42.879 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:42.879 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:42.879 01:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:43.448 01:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:43.448 01:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ce708960-98a4-46d6-aa0f-b1505668f0c7 00:10:43.707 01:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a8d4119-96c2-48fe-9ca3-be875473def3 00:10:43.966 01:23:39 
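The teardown above doubles as the actual grow verification: after bdev_lvol_grow_lvstore the store reports 99 total clusters and 61 free (99 minus the 38 clusters the 150 MiB lvol occupies), and deleting the AIO bdev must make bdev_lvol_get_lvstores fail with -19 until the bdev is re-created and re-examined. A sketch of the checks, using the same placeholders as before and NOT as the autotest helper that asserts a command fails:

"$RPC" bdev_lvol_grow_lvstore -u "$lvs"                                       # lvstore picks up the new capacity
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # expect 99
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'          # expect 61 (99 - 38 used)

"$RPC" bdev_aio_delete aio_bdev
NOT "$RPC" bdev_lvol_get_lvstores -u "$lvs"        # must fail: -19, "No such device"
"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096   # bring the base bdev back
"$RPC" bdev_wait_for_examine                       # lvstore and lvol reappear on examine
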
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:44.225 01:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:44.484 ************************************ 00:10:44.484 END TEST lvs_grow_clean 00:10:44.484 ************************************ 00:10:44.484 00:10:44.484 real 0m19.393s 00:10:44.484 user 0m18.652s 00:10:44.484 sys 0m2.374s 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:44.484 ************************************ 00:10:44.484 START TEST lvs_grow_dirty 00:10:44.484 ************************************ 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:44.484 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:44.485 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:44.485 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:44.485 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:44.485 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:44.485 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:44.485 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:44.485 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:44.743 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:44.744 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:45.003 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d51a730f-22c8-408a-a4de-57d367042216 00:10:45.003 
01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:10:45.003 01:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:45.261 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:45.261 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:45.261 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d51a730f-22c8-408a-a4de-57d367042216 lvol 150 00:10:45.520 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=321352cd-52bf-4006-ae81-4209fb8d97e5 00:10:45.520 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:45.520 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:45.779 [2024-09-28 01:23:41.671868] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:45.779 [2024-09-28 01:23:41.672001] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:45.779 true 00:10:45.779 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:10:45.780 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:46.039 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:46.039 01:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:46.298 01:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 321352cd-52bf-4006-ae81-4209fb8d97e5 00:10:46.866 01:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:46.866 [2024-09-28 01:23:42.792630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:47.125 01:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:47.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
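As in the clean variant above, the initiator now attaches a bdev_nvme controller through the listener that was just created and waits for the namespace to show up as Nvme0n1. Sketched against bdevperf's RPC socket (same socket path and NQN as in the trace):

BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

$BPERF_RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$BPERF_RPC bdev_get_bdevs -b Nvme0n1 -t 3000     # wait (about 3 s timeout) for the namespace bdev
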
00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66100 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66100 /var/tmp/bdevperf.sock 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 66100 ']' 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.384 01:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:47.384 [2024-09-28 01:23:43.193102] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:47.384 [2024-09-28 01:23:43.193259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66100 ] 00:10:47.644 [2024-09-28 01:23:43.354783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.644 [2024-09-28 01:23:43.523956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.903 [2024-09-28 01:23:43.695448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.471 01:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.471 01:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:48.471 01:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:48.730 Nvme0n1 00:10:48.730 01:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:48.990 [ 00:10:48.990 { 00:10:48.990 "name": "Nvme0n1", 00:10:48.990 "aliases": [ 00:10:48.990 "321352cd-52bf-4006-ae81-4209fb8d97e5" 00:10:48.990 ], 00:10:48.990 "product_name": "NVMe disk", 00:10:48.990 "block_size": 4096, 00:10:48.990 "num_blocks": 38912, 00:10:48.990 "uuid": "321352cd-52bf-4006-ae81-4209fb8d97e5", 00:10:48.990 "numa_id": -1, 00:10:48.990 "assigned_rate_limits": { 00:10:48.990 
"rw_ios_per_sec": 0, 00:10:48.990 "rw_mbytes_per_sec": 0, 00:10:48.990 "r_mbytes_per_sec": 0, 00:10:48.990 "w_mbytes_per_sec": 0 00:10:48.990 }, 00:10:48.990 "claimed": false, 00:10:48.990 "zoned": false, 00:10:48.990 "supported_io_types": { 00:10:48.990 "read": true, 00:10:48.990 "write": true, 00:10:48.990 "unmap": true, 00:10:48.990 "flush": true, 00:10:48.990 "reset": true, 00:10:48.990 "nvme_admin": true, 00:10:48.990 "nvme_io": true, 00:10:48.990 "nvme_io_md": false, 00:10:48.990 "write_zeroes": true, 00:10:48.990 "zcopy": false, 00:10:48.990 "get_zone_info": false, 00:10:48.990 "zone_management": false, 00:10:48.990 "zone_append": false, 00:10:48.990 "compare": true, 00:10:48.990 "compare_and_write": true, 00:10:48.990 "abort": true, 00:10:48.990 "seek_hole": false, 00:10:48.990 "seek_data": false, 00:10:48.990 "copy": true, 00:10:48.990 "nvme_iov_md": false 00:10:48.990 }, 00:10:48.990 "memory_domains": [ 00:10:48.990 { 00:10:48.990 "dma_device_id": "system", 00:10:48.990 "dma_device_type": 1 00:10:48.990 } 00:10:48.990 ], 00:10:48.990 "driver_specific": { 00:10:48.990 "nvme": [ 00:10:48.990 { 00:10:48.990 "trid": { 00:10:48.990 "trtype": "TCP", 00:10:48.990 "adrfam": "IPv4", 00:10:48.990 "traddr": "10.0.0.3", 00:10:48.990 "trsvcid": "4420", 00:10:48.990 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:48.990 }, 00:10:48.990 "ctrlr_data": { 00:10:48.990 "cntlid": 1, 00:10:48.990 "vendor_id": "0x8086", 00:10:48.990 "model_number": "SPDK bdev Controller", 00:10:48.990 "serial_number": "SPDK0", 00:10:48.990 "firmware_revision": "25.01", 00:10:48.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:48.990 "oacs": { 00:10:48.990 "security": 0, 00:10:48.990 "format": 0, 00:10:48.990 "firmware": 0, 00:10:48.990 "ns_manage": 0 00:10:48.990 }, 00:10:48.990 "multi_ctrlr": true, 00:10:48.990 "ana_reporting": false 00:10:48.990 }, 00:10:48.990 "vs": { 00:10:48.990 "nvme_version": "1.3" 00:10:48.990 }, 00:10:48.990 "ns_data": { 00:10:48.990 "id": 1, 00:10:48.990 "can_share": true 00:10:48.990 } 00:10:48.990 } 00:10:48.990 ], 00:10:48.990 "mp_policy": "active_passive" 00:10:48.990 } 00:10:48.990 } 00:10:48.990 ] 00:10:48.990 01:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66129 00:10:48.990 01:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:48.990 01:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:48.990 Running I/O for 10 seconds... 
00:10:50.368 Latency(us) 00:10:50.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.368 Nvme0n1 : 1.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:10:50.368 =================================================================================================================== 00:10:50.368 Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:10:50.368 00:10:50.935 01:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d51a730f-22c8-408a-a4de-57d367042216 00:10:51.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.194 Nvme0n1 : 2.00 5524.50 21.58 0.00 0.00 0.00 0.00 0.00 00:10:51.194 =================================================================================================================== 00:10:51.194 Total : 5524.50 21.58 0.00 0.00 0.00 0.00 0.00 00:10:51.194 00:10:51.194 true 00:10:51.194 01:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:10:51.194 01:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:51.763 01:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:51.763 01:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:51.763 01:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66129 00:10:52.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.022 Nvme0n1 : 3.00 5545.67 21.66 0.00 0.00 0.00 0.00 0.00 00:10:52.022 =================================================================================================================== 00:10:52.022 Total : 5545.67 21.66 0.00 0.00 0.00 0.00 0.00 00:10:52.022 00:10:53.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.401 Nvme0n1 : 4.00 5430.25 21.21 0.00 0.00 0.00 0.00 0.00 00:10:53.401 =================================================================================================================== 00:10:53.401 Total : 5430.25 21.21 0.00 0.00 0.00 0.00 0.00 00:10:53.401 00:10:53.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.969 Nvme0n1 : 5.00 5436.40 21.24 0.00 0.00 0.00 0.00 0.00 00:10:53.969 =================================================================================================================== 00:10:53.969 Total : 5436.40 21.24 0.00 0.00 0.00 0.00 0.00 00:10:53.969 00:10:55.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.345 Nvme0n1 : 6.00 5461.67 21.33 0.00 0.00 0.00 0.00 0.00 00:10:55.345 =================================================================================================================== 00:10:55.345 Total : 5461.67 21.33 0.00 0.00 0.00 0.00 0.00 00:10:55.345 00:10:56.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.307 Nvme0n1 : 7.00 5461.57 21.33 0.00 0.00 0.00 0.00 0.00 00:10:56.307 =================================================================================================================== 00:10:56.307 
Total : 5461.57 21.33 0.00 0.00 0.00 0.00 0.00 00:10:56.307 00:10:57.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.242 Nvme0n1 : 8.00 5477.38 21.40 0.00 0.00 0.00 0.00 0.00 00:10:57.242 =================================================================================================================== 00:10:57.242 Total : 5477.38 21.40 0.00 0.00 0.00 0.00 0.00 00:10:57.242 00:10:58.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.177 Nvme0n1 : 9.00 5475.56 21.39 0.00 0.00 0.00 0.00 0.00 00:10:58.177 =================================================================================================================== 00:10:58.177 Total : 5475.56 21.39 0.00 0.00 0.00 0.00 0.00 00:10:58.177 00:10:59.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.111 Nvme0n1 : 10.00 5474.10 21.38 0.00 0.00 0.00 0.00 0.00 00:10:59.111 =================================================================================================================== 00:10:59.111 Total : 5474.10 21.38 0.00 0.00 0.00 0.00 0.00 00:10:59.112 00:10:59.112 00:10:59.112 Latency(us) 00:10:59.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.112 Nvme0n1 : 10.02 5475.14 21.39 0.00 0.00 23371.56 5004.57 95325.09 00:10:59.112 =================================================================================================================== 00:10:59.112 Total : 5475.14 21.39 0.00 0.00 23371.56 5004.57 95325.09 00:10:59.112 { 00:10:59.112 "results": [ 00:10:59.112 { 00:10:59.112 "job": "Nvme0n1", 00:10:59.112 "core_mask": "0x2", 00:10:59.112 "workload": "randwrite", 00:10:59.112 "status": "finished", 00:10:59.112 "queue_depth": 128, 00:10:59.112 "io_size": 4096, 00:10:59.112 "runtime": 10.021473, 00:10:59.112 "iops": 5475.143224953058, 00:10:59.112 "mibps": 21.387278222472883, 00:10:59.112 "io_failed": 0, 00:10:59.112 "io_timeout": 0, 00:10:59.112 "avg_latency_us": 23371.56471185087, 00:10:59.112 "min_latency_us": 5004.567272727273, 00:10:59.112 "max_latency_us": 95325.09090909091 00:10:59.112 } 00:10:59.112 ], 00:10:59.112 "core_count": 1 00:10:59.112 } 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66100 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 66100 ']' 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 66100 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66100 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66100' 00:10:59.112 killing process with pid 66100 
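The per-second tables and the closing summary above come from the JSON blob that bdevperf prints when the run completes. A small jq sketch for pulling the same headline numbers, assuming that JSON had been captured into a hypothetical results.json:

  # Sketch only: results.json is a hypothetical capture of the JSON printed above.
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json
  # -> Nvme0n1: 5475.143224953058 IOPS, 21.387278222472883 MiB/s, avg 23371.56471185087 us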
00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 66100 00:10:59.112 Received shutdown signal, test time was about 10.000000 seconds 00:10:59.112 00:10:59.112 Latency(us) 00:10:59.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.112 =================================================================================================================== 00:10:59.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:59.112 01:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 66100 00:11:00.049 01:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:00.617 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:00.617 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:11:00.617 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65730 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65730 00:11:00.876 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65730 Killed "${NVMF_APP[@]}" "$@" 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.876 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
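Because the target that owned the lvstore was killed with -9 immediately after the grow, the restarted target has to recover the dirty metadata from the AIO backing file before the cluster counts can be re-checked. A condensed sketch of the recover-and-verify sequence performed below, using the same rpc.py calls that appear in this log:

  # Sketch only: re-attach the backing file and confirm the grown, recovered lvstore.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  LVS_UUID=d51a730f-22c8-408a-a4de-57d367042216            # lvstore UUID from earlier in this run
  $RPC bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  $RPC bdev_wait_for_examine                                # blobstore recovery replays the dirty metadata
  free_clusters=$($RPC bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
  data_clusters=$($RPC bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
  (( free_clusters == 61 && data_clusters == 99 ))          # same checks as nvmf_lvs_grow.sh@79/@80 below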
00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=66263 00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 66263 00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 66263 ']' 00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.135 01:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:01.135 [2024-09-28 01:23:56.914508] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:01.135 [2024-09-28 01:23:56.914709] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.393 [2024-09-28 01:23:57.084642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.393 [2024-09-28 01:23:57.235782] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.393 [2024-09-28 01:23:57.235845] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.393 [2024-09-28 01:23:57.235880] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.393 [2024-09-28 01:23:57.235895] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.393 [2024-09-28 01:23:57.235907] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:01.393 [2024-09-28 01:23:57.235942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.652 [2024-09-28 01:23:57.387300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.911 01:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.911 01:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:01.911 01:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:01.911 01:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.911 01:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.911 01:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.911 01:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:02.169 [2024-09-28 01:23:58.043587] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:02.169 [2024-09-28 01:23:58.043925] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:02.169 [2024-09-28 01:23:58.044186] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:02.169 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:02.170 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 321352cd-52bf-4006-ae81-4209fb8d97e5 00:11:02.170 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=321352cd-52bf-4006-ae81-4209fb8d97e5 00:11:02.170 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.170 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:02.170 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.170 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.170 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:02.457 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 321352cd-52bf-4006-ae81-4209fb8d97e5 -t 2000 00:11:02.745 [ 00:11:02.745 { 00:11:02.745 "name": "321352cd-52bf-4006-ae81-4209fb8d97e5", 00:11:02.745 "aliases": [ 00:11:02.745 "lvs/lvol" 00:11:02.745 ], 00:11:02.745 "product_name": "Logical Volume", 00:11:02.745 "block_size": 4096, 00:11:02.745 "num_blocks": 38912, 00:11:02.745 "uuid": "321352cd-52bf-4006-ae81-4209fb8d97e5", 00:11:02.745 "assigned_rate_limits": { 00:11:02.745 "rw_ios_per_sec": 0, 00:11:02.745 "rw_mbytes_per_sec": 0, 00:11:02.745 "r_mbytes_per_sec": 0, 00:11:02.745 "w_mbytes_per_sec": 0 00:11:02.745 }, 00:11:02.745 
"claimed": false, 00:11:02.745 "zoned": false, 00:11:02.745 "supported_io_types": { 00:11:02.745 "read": true, 00:11:02.745 "write": true, 00:11:02.745 "unmap": true, 00:11:02.745 "flush": false, 00:11:02.745 "reset": true, 00:11:02.745 "nvme_admin": false, 00:11:02.745 "nvme_io": false, 00:11:02.745 "nvme_io_md": false, 00:11:02.745 "write_zeroes": true, 00:11:02.745 "zcopy": false, 00:11:02.745 "get_zone_info": false, 00:11:02.745 "zone_management": false, 00:11:02.745 "zone_append": false, 00:11:02.745 "compare": false, 00:11:02.745 "compare_and_write": false, 00:11:02.745 "abort": false, 00:11:02.745 "seek_hole": true, 00:11:02.745 "seek_data": true, 00:11:02.745 "copy": false, 00:11:02.745 "nvme_iov_md": false 00:11:02.745 }, 00:11:02.745 "driver_specific": { 00:11:02.745 "lvol": { 00:11:02.745 "lvol_store_uuid": "d51a730f-22c8-408a-a4de-57d367042216", 00:11:02.745 "base_bdev": "aio_bdev", 00:11:02.745 "thin_provision": false, 00:11:02.745 "num_allocated_clusters": 38, 00:11:02.745 "snapshot": false, 00:11:02.745 "clone": false, 00:11:02.745 "esnap_clone": false 00:11:02.745 } 00:11:02.745 } 00:11:02.745 } 00:11:02.745 ] 00:11:02.745 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:02.745 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:11:02.745 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:03.005 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:03.005 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:11:03.005 01:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:03.264 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:03.264 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:03.524 [2024-09-28 01:23:59.329372] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.524 01:23:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:03.524 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:11:03.783 request: 00:11:03.783 { 00:11:03.783 "uuid": "d51a730f-22c8-408a-a4de-57d367042216", 00:11:03.783 "method": "bdev_lvol_get_lvstores", 00:11:03.783 "req_id": 1 00:11:03.783 } 00:11:03.783 Got JSON-RPC error response 00:11:03.783 response: 00:11:03.783 { 00:11:03.783 "code": -19, 00:11:03.783 "message": "No such device" 00:11:03.783 } 00:11:03.783 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:03.783 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:03.783 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:03.783 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:03.784 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.043 aio_bdev 00:11:04.043 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 321352cd-52bf-4006-ae81-4209fb8d97e5 00:11:04.043 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=321352cd-52bf-4006-ae81-4209fb8d97e5 00:11:04.043 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.043 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:04.043 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.043 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.043 01:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:04.301 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 321352cd-52bf-4006-ae81-4209fb8d97e5 -t 2000 00:11:04.560 [ 00:11:04.560 { 
00:11:04.560 "name": "321352cd-52bf-4006-ae81-4209fb8d97e5", 00:11:04.560 "aliases": [ 00:11:04.560 "lvs/lvol" 00:11:04.560 ], 00:11:04.560 "product_name": "Logical Volume", 00:11:04.560 "block_size": 4096, 00:11:04.560 "num_blocks": 38912, 00:11:04.560 "uuid": "321352cd-52bf-4006-ae81-4209fb8d97e5", 00:11:04.560 "assigned_rate_limits": { 00:11:04.560 "rw_ios_per_sec": 0, 00:11:04.560 "rw_mbytes_per_sec": 0, 00:11:04.560 "r_mbytes_per_sec": 0, 00:11:04.560 "w_mbytes_per_sec": 0 00:11:04.560 }, 00:11:04.560 "claimed": false, 00:11:04.560 "zoned": false, 00:11:04.560 "supported_io_types": { 00:11:04.560 "read": true, 00:11:04.560 "write": true, 00:11:04.560 "unmap": true, 00:11:04.561 "flush": false, 00:11:04.561 "reset": true, 00:11:04.561 "nvme_admin": false, 00:11:04.561 "nvme_io": false, 00:11:04.561 "nvme_io_md": false, 00:11:04.561 "write_zeroes": true, 00:11:04.561 "zcopy": false, 00:11:04.561 "get_zone_info": false, 00:11:04.561 "zone_management": false, 00:11:04.561 "zone_append": false, 00:11:04.561 "compare": false, 00:11:04.561 "compare_and_write": false, 00:11:04.561 "abort": false, 00:11:04.561 "seek_hole": true, 00:11:04.561 "seek_data": true, 00:11:04.561 "copy": false, 00:11:04.561 "nvme_iov_md": false 00:11:04.561 }, 00:11:04.561 "driver_specific": { 00:11:04.561 "lvol": { 00:11:04.561 "lvol_store_uuid": "d51a730f-22c8-408a-a4de-57d367042216", 00:11:04.561 "base_bdev": "aio_bdev", 00:11:04.561 "thin_provision": false, 00:11:04.561 "num_allocated_clusters": 38, 00:11:04.561 "snapshot": false, 00:11:04.561 "clone": false, 00:11:04.561 "esnap_clone": false 00:11:04.561 } 00:11:04.561 } 00:11:04.561 } 00:11:04.561 ] 00:11:04.561 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:04.561 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:11:04.561 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:04.819 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:04.819 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d51a730f-22c8-408a-a4de-57d367042216 00:11:04.819 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:05.078 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:05.078 01:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 321352cd-52bf-4006-ae81-4209fb8d97e5 00:11:05.337 01:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d51a730f-22c8-408a-a4de-57d367042216 00:11:05.596 01:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:05.855 01:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:06.113 ************************************ 00:11:06.113 END TEST lvs_grow_dirty 00:11:06.113 ************************************ 00:11:06.113 00:11:06.113 real 0m21.708s 00:11:06.113 user 0m46.028s 00:11:06.113 sys 0m9.035s 00:11:06.113 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.113 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:06.372 nvmf_trace.0 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.372 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.372 rmmod nvme_tcp 00:11:06.372 rmmod nvme_fabrics 00:11:06.372 rmmod nvme_keyring 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 66263 ']' 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 66263 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 66263 ']' 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 66263 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:06.631 01:24:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66263 00:11:06.631 killing process with pid 66263 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66263' 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 66263 00:11:06.631 01:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 66263 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:07.567 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:07.827 00:11:07.827 real 0m44.732s 00:11:07.827 user 1m11.877s 00:11:07.827 sys 0m12.251s 00:11:07.827 ************************************ 00:11:07.827 END TEST nvmf_lvs_grow 00:11:07.827 ************************************ 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.827 ************************************ 00:11:07.827 START TEST nvmf_bdev_io_wait 00:11:07.827 ************************************ 00:11:07.827 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:07.827 * Looking for test storage... 
00:11:08.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.087 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.088 --rc genhtml_branch_coverage=1 00:11:08.088 --rc genhtml_function_coverage=1 00:11:08.088 --rc genhtml_legend=1 00:11:08.088 --rc geninfo_all_blocks=1 00:11:08.088 --rc geninfo_unexecuted_blocks=1 00:11:08.088 00:11:08.088 ' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.088 --rc genhtml_branch_coverage=1 00:11:08.088 --rc genhtml_function_coverage=1 00:11:08.088 --rc genhtml_legend=1 00:11:08.088 --rc geninfo_all_blocks=1 00:11:08.088 --rc geninfo_unexecuted_blocks=1 00:11:08.088 00:11:08.088 ' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.088 --rc genhtml_branch_coverage=1 00:11:08.088 --rc genhtml_function_coverage=1 00:11:08.088 --rc genhtml_legend=1 00:11:08.088 --rc geninfo_all_blocks=1 00:11:08.088 --rc geninfo_unexecuted_blocks=1 00:11:08.088 00:11:08.088 ' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.088 --rc genhtml_branch_coverage=1 00:11:08.088 --rc genhtml_function_coverage=1 00:11:08.088 --rc genhtml_legend=1 00:11:08.088 --rc geninfo_all_blocks=1 00:11:08.088 --rc geninfo_unexecuted_blocks=1 00:11:08.088 00:11:08.088 ' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.088 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
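nvmftestinit, traced below, builds the whole NET_TYPE=virt fabric out of veth pairs, a bridge, and a network namespace before any NVMe-oF/TCP traffic flows. A condensed sketch of the topology it converges on, using the interface names from this log (the script also brings every link up, adds iptables ACCEPT rules for port 4420, and repeats the pattern for the *_if2/*_br2 pair):

  # Sketch only: initiator side stays in the root namespace, target side moves into nvmf_tgt_ns_spdk.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target    <-> bridge leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                            # reachability check, as seen below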
00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:08.088 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:08.089 
01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:08.089 Cannot find device "nvmf_init_br" 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:08.089 Cannot find device "nvmf_init_br2" 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:08.089 Cannot find device "nvmf_tgt_br" 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.089 Cannot find device "nvmf_tgt_br2" 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:08.089 Cannot find device "nvmf_init_br" 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:08.089 Cannot find device "nvmf_init_br2" 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:08.089 Cannot find device "nvmf_tgt_br" 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:08.089 Cannot find device "nvmf_tgt_br2" 00:11:08.089 01:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:08.089 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:08.089 Cannot find device "nvmf_br" 00:11:08.089 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:08.089 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:08.348 Cannot find device "nvmf_init_if" 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:08.348 Cannot find device "nvmf_init_if2" 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:08.348 
01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.348 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:08.607 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:08.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:08.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:11:08.608 00:11:08.608 --- 10.0.0.3 ping statistics --- 00:11:08.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.608 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:08.608 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:08.608 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:11:08.608 00:11:08.608 --- 10.0.0.4 ping statistics --- 00:11:08.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.608 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:08.608 00:11:08.608 --- 10.0.0.1 ping statistics --- 00:11:08.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.608 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:08.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:08.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:11:08.608 00:11:08.608 --- 10.0.0.2 ping statistics --- 00:11:08.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.608 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=66643 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 66643 00:11:08.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 66643 ']' 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.608 01:24:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.608 [2024-09-28 01:24:04.456385] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:08.608 [2024-09-28 01:24:04.456849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.866 [2024-09-28 01:24:04.631842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.125 [2024-09-28 01:24:04.805289] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.125 [2024-09-28 01:24:04.805569] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.125 [2024-09-28 01:24:04.805738] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.125 [2024-09-28 01:24:04.805881] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.125 [2024-09-28 01:24:04.805924] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.125 [2024-09-28 01:24:04.806161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.125 [2024-09-28 01:24:04.806556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.125 [2024-09-28 01:24:04.806865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.125 [2024-09-28 01:24:04.806881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.693 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.956 [2024-09-28 01:24:05.703748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.956 [2024-09-28 01:24:05.724396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.956 Malloc0 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.956 [2024-09-28 01:24:05.840496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66685 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66687 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:09.956 01:24:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:09.956 { 00:11:09.956 "params": { 00:11:09.956 "name": "Nvme$subsystem", 00:11:09.956 "trtype": "$TEST_TRANSPORT", 00:11:09.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.956 "adrfam": "ipv4", 00:11:09.956 "trsvcid": "$NVMF_PORT", 00:11:09.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.956 "hdgst": ${hdgst:-false}, 00:11:09.956 "ddgst": ${ddgst:-false} 00:11:09.956 }, 00:11:09.956 "method": "bdev_nvme_attach_controller" 00:11:09.956 } 00:11:09.956 EOF 00:11:09.956 )") 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66689 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:09.956 { 00:11:09.956 "params": { 00:11:09.956 "name": "Nvme$subsystem", 00:11:09.956 "trtype": "$TEST_TRANSPORT", 00:11:09.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.956 "adrfam": "ipv4", 00:11:09.956 "trsvcid": "$NVMF_PORT", 00:11:09.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.956 "hdgst": ${hdgst:-false}, 00:11:09.956 "ddgst": ${ddgst:-false} 00:11:09.956 }, 00:11:09.956 "method": "bdev_nvme_attach_controller" 00:11:09.956 } 00:11:09.956 EOF 00:11:09.956 )") 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:09.956 { 00:11:09.956 "params": { 00:11:09.956 "name": "Nvme$subsystem", 00:11:09.956 "trtype": "$TEST_TRANSPORT", 00:11:09.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.956 "adrfam": "ipv4", 00:11:09.956 "trsvcid": 
"$NVMF_PORT", 00:11:09.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.956 "hdgst": ${hdgst:-false}, 00:11:09.956 "ddgst": ${ddgst:-false} 00:11:09.956 }, 00:11:09.956 "method": "bdev_nvme_attach_controller" 00:11:09.956 } 00:11:09.956 EOF 00:11:09.956 )") 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:09.956 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:09.956 { 00:11:09.956 "params": { 00:11:09.956 "name": "Nvme$subsystem", 00:11:09.956 "trtype": "$TEST_TRANSPORT", 00:11:09.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.956 "adrfam": "ipv4", 00:11:09.956 "trsvcid": "$NVMF_PORT", 00:11:09.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.956 "hdgst": ${hdgst:-false}, 00:11:09.956 "ddgst": ${ddgst:-false} 00:11:09.956 }, 00:11:09.956 "method": "bdev_nvme_attach_controller" 00:11:09.956 } 00:11:09.957 EOF 00:11:09.957 )") 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66692 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:09.957 "params": { 00:11:09.957 "name": "Nvme1", 00:11:09.957 "trtype": "tcp", 00:11:09.957 "traddr": "10.0.0.3", 00:11:09.957 "adrfam": "ipv4", 00:11:09.957 "trsvcid": "4420", 00:11:09.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.957 "hdgst": false, 00:11:09.957 "ddgst": false 00:11:09.957 }, 00:11:09.957 "method": "bdev_nvme_attach_controller" 00:11:09.957 }' 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:09.957 "params": { 00:11:09.957 "name": "Nvme1", 00:11:09.957 "trtype": "tcp", 00:11:09.957 "traddr": "10.0.0.3", 00:11:09.957 "adrfam": "ipv4", 00:11:09.957 "trsvcid": "4420", 00:11:09.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.957 "hdgst": false, 00:11:09.957 "ddgst": false 00:11:09.957 }, 00:11:09.957 "method": "bdev_nvme_attach_controller" 00:11:09.957 }' 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:11:09.957 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:09.957 "params": { 00:11:09.957 "name": "Nvme1", 00:11:09.957 "trtype": "tcp", 00:11:09.957 "traddr": "10.0.0.3", 00:11:09.957 "adrfam": "ipv4", 00:11:09.957 "trsvcid": "4420", 00:11:09.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.957 "hdgst": false, 00:11:09.957 "ddgst": false 00:11:09.957 }, 00:11:09.957 "method": "bdev_nvme_attach_controller" 00:11:09.957 }' 00:11:10.216 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:11:10.216 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:10.216 "params": { 00:11:10.216 "name": "Nvme1", 00:11:10.216 "trtype": "tcp", 00:11:10.216 "traddr": "10.0.0.3", 00:11:10.216 "adrfam": "ipv4", 00:11:10.216 "trsvcid": "4420", 00:11:10.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.216 "hdgst": false, 00:11:10.216 "ddgst": false 00:11:10.216 }, 00:11:10.216 "method": "bdev_nvme_attach_controller" 00:11:10.216 }' 00:11:10.216 01:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66685 00:11:10.216 [2024-09-28 01:24:05.946622] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:10.216 [2024-09-28 01:24:05.946985] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:10.216 [2024-09-28 01:24:05.956880] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:10.216 [2024-09-28 01:24:05.957186] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:10.216 [2024-09-28 01:24:05.957085] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:10.216 [2024-09-28 01:24:05.958085] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:10.216 [2024-09-28 01:24:05.985606] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:10.216 [2024-09-28 01:24:05.985987] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:10.216 [2024-09-28 01:24:06.146561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.474 [2024-09-28 01:24:06.186758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.474 [2024-09-28 01:24:06.238274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.474 [2024-09-28 01:24:06.285477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.474 [2024-09-28 01:24:06.356961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:10.474 [2024-09-28 01:24:06.359498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:11:10.733 [2024-09-28 01:24:06.450271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:10.733 [2024-09-28 01:24:06.499346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:10.733 [2024-09-28 01:24:06.552268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.733 [2024-09-28 01:24:06.560098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.733 [2024-09-28 01:24:06.630990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.991 [2024-09-28 01:24:06.700789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.991 Running I/O for 1 seconds... 00:11:10.991 Running I/O for 1 seconds... 00:11:10.991 Running I/O for 1 seconds... 00:11:10.991 Running I/O for 1 seconds... 
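Behind the interleaved config generation and EAL start-up above, each of the four bdevperf processes is doing the same thing: read a small SPDK JSON config from /dev/fd/63 (a process substitution of gen_nvmf_target_json's output) that tells it how to attach to cnode1 at 10.0.0.3:4420, pin itself to one core (-m 0x10/0x20/0x40/0x80) with its own instance id (-i 1..4) and a 256 MB memory size (-s 256), and run one workload for one second. A condensed sketch of that flow follows; gen_attach_json is a hypothetical, simplified stand-in for the nvmf/common.sh helper (whose outer JSON wrapping is not shown in the trace), and the values are the ones this run printed:

# gen_attach_json: hypothetical stand-in for gen_nvmf_target_json as traced above;
# it emits a standard SPDK JSON config with the single attach entry this run used.
gen_attach_json() {
  jq . << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# One bdevperf per workload, matching the invocations traced above; the /dev/fd/63
# seen in the trace corresponds to the process substitution used here.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" -m 0x10 -i 1 --json <(gen_attach_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_attach_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_attach_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_attach_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID"   # bdev_io_wait.sh@37 waits for the write instance first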
00:11:11.928 7860.00 IOPS, 30.70 MiB/s 7295.00 IOPS, 28.50 MiB/s 00:11:11.928 Latency(us) 00:11:11.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.928 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:11.928 Nvme1n1 : 1.01 7896.17 30.84 0.00 0.00 16107.39 5630.14 20494.89 00:11:11.928 =================================================================================================================== 00:11:11.928 Total : 7896.17 30.84 0.00 0.00 16107.39 5630.14 20494.89 00:11:11.928 00:11:11.928 Latency(us) 00:11:11.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.928 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:11.928 Nvme1n1 : 1.01 7353.57 28.72 0.00 0.00 17309.66 9413.35 26452.71 00:11:11.928 =================================================================================================================== 00:11:11.928 Total : 7353.57 28.72 0.00 0.00 17309.66 9413.35 26452.71 00:11:11.928 139552.00 IOPS, 545.12 MiB/s 00:11:11.928 Latency(us) 00:11:11.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.928 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:11.928 Nvme1n1 : 1.00 139233.22 543.88 0.00 0.00 914.63 487.80 3902.37 00:11:11.928 =================================================================================================================== 00:11:11.928 Total : 139233.22 543.88 0.00 0.00 914.63 487.80 3902.37 00:11:12.187 7029.00 IOPS, 27.46 MiB/s 00:11:12.187 Latency(us) 00:11:12.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.187 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:12.187 Nvme1n1 : 1.01 7090.05 27.70 0.00 0.00 17950.44 8102.63 32172.22 00:11:12.187 =================================================================================================================== 00:11:12.187 Total : 7090.05 27.70 0.00 0.00 17950.44 8102.63 32172.22 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66687 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66689 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66692 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:13.135 01:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:13.135 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.135 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 
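After the result tables, the trace above and into the next stretch of log is the wind-down. Condensed into the order it actually happens (rpc_cmd and nvmftestfini are the suite's own helpers from the test scripts, and the pids are this run's):

# Wind-down order for the bdev_io_wait case, condensed from the surrounding trace:
wait "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"                 # reap bdevperf 66687/66689/66692
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem while the target runs
trap - SIGINT SIGTERM EXIT                                 # disarm the error-path cleanup trap
nvmftestfini   # then: sync, retry 'modprobe -r nvme-tcp'/'nvme-fabrics', kill nvmf_tgt (66643),
               # strip the SPDK_NVMF iptables rules, delete the veths, bridge and namespace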
00:11:13.135 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.135 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.135 rmmod nvme_tcp 00:11:13.135 rmmod nvme_fabrics 00:11:13.135 rmmod nvme_keyring 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 66643 ']' 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 66643 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 66643 ']' 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 66643 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66643 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.394 killing process with pid 66643 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66643' 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 66643 00:11:13.394 01:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 66643 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 
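The iptr call a few entries above can undo the firewall changes in a single pass because every rule the test added went in through ipts, which appends an SPDK_NVMF comment (the -m comment arguments visible earlier in this log). Stripped of the wrappers, the pattern is roughly:

# Insert rules tagged with a recognizable comment (what the ipts wrapper does)...
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# ...so cleanup is just a filter over the full ruleset rather than per-rule bookkeeping:
iptables-save | grep -v SPDK_NVMF | iptables-restore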
00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:14.330 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:14.589 00:11:14.589 real 0m6.634s 00:11:14.589 user 0m29.459s 00:11:14.589 sys 0m2.787s 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 ************************************ 00:11:14.589 END TEST nvmf_bdev_io_wait 00:11:14.589 ************************************ 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 ************************************ 00:11:14.589 START TEST nvmf_queue_depth 00:11:14.589 ************************************ 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:14.589 * Looking for test storage... 
00:11:14.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:11:14.589 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:14.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.849 --rc genhtml_branch_coverage=1 00:11:14.849 --rc genhtml_function_coverage=1 00:11:14.849 --rc genhtml_legend=1 00:11:14.849 --rc geninfo_all_blocks=1 00:11:14.849 --rc geninfo_unexecuted_blocks=1 00:11:14.849 00:11:14.849 ' 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:14.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.849 --rc genhtml_branch_coverage=1 00:11:14.849 --rc genhtml_function_coverage=1 00:11:14.849 --rc genhtml_legend=1 00:11:14.849 --rc geninfo_all_blocks=1 00:11:14.849 --rc geninfo_unexecuted_blocks=1 00:11:14.849 00:11:14.849 ' 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:14.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.849 --rc genhtml_branch_coverage=1 00:11:14.849 --rc genhtml_function_coverage=1 00:11:14.849 --rc genhtml_legend=1 00:11:14.849 --rc geninfo_all_blocks=1 00:11:14.849 --rc geninfo_unexecuted_blocks=1 00:11:14.849 00:11:14.849 ' 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:14.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.849 --rc genhtml_branch_coverage=1 00:11:14.849 --rc genhtml_function_coverage=1 00:11:14.849 --rc genhtml_legend=1 00:11:14.849 --rc geninfo_all_blocks=1 00:11:14.849 --rc geninfo_unexecuted_blocks=1 00:11:14.849 00:11:14.849 ' 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.849 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:14.850 
01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.850 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:14.851 01:24:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:14.851 Cannot find device "nvmf_init_br" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:14.851 Cannot find device "nvmf_init_br2" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:14.851 Cannot find device "nvmf_tgt_br" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:14.851 Cannot find device "nvmf_tgt_br2" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:14.851 Cannot find device "nvmf_init_br" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:14.851 Cannot find device "nvmf_init_br2" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:14.851 Cannot find device "nvmf_tgt_br" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:14.851 Cannot find device "nvmf_tgt_br2" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:14.851 Cannot find device "nvmf_br" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:14.851 Cannot find device "nvmf_init_if" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:14.851 Cannot find device "nvmf_init_if2" 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:14.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.851 01:24:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:14.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:14.851 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:15.111 
01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:15.111 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:15.111 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:11:15.111 00:11:15.111 --- 10.0.0.3 ping statistics --- 00:11:15.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.111 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:15.111 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:15.111 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:11:15.111 00:11:15.111 --- 10.0.0.4 ping statistics --- 00:11:15.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.111 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:15.111 01:24:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:15.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:11:15.111 00:11:15.111 --- 10.0.0.1 ping statistics --- 00:11:15.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.111 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:15.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:15.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:15.111 00:11:15.111 --- 10.0.0.2 ping statistics --- 00:11:15.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.111 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=66997 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 66997 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66997 ']' 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.111 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.371 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.371 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.371 01:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.371 [2024-09-28 01:24:11.158868] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
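The nvmfappstart call above boils down to launching the target binary inside the namespace and waiting for its RPC socket. A minimal stand-in, with the binary path, flags and socket path taken from the trace; the polling loop is a deliberate simplification of what waitforlisten does:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # wait until the app has created its RPC listen socket
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done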
00:11:15.371 [2024-09-28 01:24:11.159035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.630 [2024-09-28 01:24:11.336651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.630 [2024-09-28 01:24:11.505479] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.630 [2024-09-28 01:24:11.505552] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.630 [2024-09-28 01:24:11.505570] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.630 [2024-09-28 01:24:11.505585] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.630 [2024-09-28 01:24:11.505596] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.630 [2024-09-28 01:24:11.505632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.890 [2024-09-28 01:24:11.657700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.459 [2024-09-28 01:24:12.189975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.459 Malloc0 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.459 [2024-09-28 01:24:12.289680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67035 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67035 /var/tmp/bdevperf.sock 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 67035 ']' 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.459 01:24:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.719 [2024-09-28 01:24:12.406517] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:16.719 [2024-09-28 01:24:12.406684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67035 ] 00:11:16.719 [2024-09-28 01:24:12.578126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.978 [2024-09-28 01:24:12.739841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.978 [2024-09-28 01:24:12.896295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.546 01:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.546 01:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:17.546 01:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:17.546 01:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.546 01:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.546 NVMe0n1 00:11:17.546 01:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.546 01:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:17.805 Running I/O for 10 seconds... 00:11:28.086 6150.00 IOPS, 24.02 MiB/s 6658.00 IOPS, 26.01 MiB/s 6703.00 IOPS, 26.18 MiB/s 6713.25 IOPS, 26.22 MiB/s 6767.20 IOPS, 26.43 MiB/s 6794.83 IOPS, 26.54 MiB/s 6776.00 IOPS, 26.47 MiB/s 6797.75 IOPS, 26.55 MiB/s 6834.67 IOPS, 26.70 MiB/s 6847.00 IOPS, 26.75 MiB/s 00:11:28.086 Latency(us) 00:11:28.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.086 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:28.086 Verification LBA range: start 0x0 length 0x4000 00:11:28.086 NVMe0n1 : 10.11 6863.49 26.81 0.00 0.00 148331.02 22997.18 101044.60 00:11:28.086 =================================================================================================================== 00:11:28.086 Total : 6863.49 26.81 0.00 0.00 148331.02 22997.18 101044.60 00:11:28.086 { 00:11:28.086 "results": [ 00:11:28.086 { 00:11:28.086 "job": "NVMe0n1", 00:11:28.086 "core_mask": "0x1", 00:11:28.086 "workload": "verify", 00:11:28.086 "status": "finished", 00:11:28.086 "verify_range": { 00:11:28.086 "start": 0, 00:11:28.086 "length": 16384 00:11:28.086 }, 00:11:28.086 "queue_depth": 1024, 00:11:28.086 "io_size": 4096, 00:11:28.086 "runtime": 10.106672, 00:11:28.086 "iops": 6863.485824018035, 00:11:28.086 "mibps": 26.810491500070448, 00:11:28.086 "io_failed": 0, 00:11:28.086 "io_timeout": 0, 00:11:28.086 "avg_latency_us": 148331.01863338213, 00:11:28.087 "min_latency_us": 22997.17818181818, 00:11:28.087 "max_latency_us": 101044.59636363636 00:11:28.087 } 00:11:28.087 ], 00:11:28.087 "core_count": 1 00:11:28.087 } 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67035 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 67035 ']' 00:11:28.087 01:24:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 67035 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67035 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.087 killing process with pid 67035 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67035' 00:11:28.087 Received shutdown signal, test time was about 10.000000 seconds 00:11:28.087 00:11:28.087 Latency(us) 00:11:28.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.087 =================================================================================================================== 00:11:28.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 67035 00:11:28.087 01:24:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 67035 00:11:29.024 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:29.024 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:29.024 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:29.024 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:29.024 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.025 rmmod nvme_tcp 00:11:29.025 rmmod nvme_fabrics 00:11:29.025 rmmod nvme_keyring 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 66997 ']' 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 66997 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66997 ']' 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66997 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66997 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:29.025 killing process with pid 66997 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66997' 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66997 00:11:29.025 01:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66997 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:29.962 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:30.221 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:30.221 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:30.221 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:30.221 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:30.221 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:30.221 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:30.221 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:30.221 01:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:30.221 01:24:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:30.221 00:11:30.221 real 0m15.749s 00:11:30.221 user 0m26.362s 00:11:30.221 sys 0m2.334s 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.221 ************************************ 00:11:30.221 END TEST nvmf_queue_depth 00:11:30.221 ************************************ 00:11:30.221 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.481 ************************************ 00:11:30.481 START TEST nvmf_target_multipath 00:11:30.481 ************************************ 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:30.481 * Looking for test storage... 
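Before the multipath case gets going, note that the queue_depth run which just ended is, with the harness plumbing stripped away, only a handful of RPCs plus one bdevperf invocation. Every flag below is copied from the trace; rpc.py is used here in place of the rpc_cmd wrapper, so this is a sketch rather than queue_depth.sh itself:

  # target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem, one listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # host side: bdevperf attaches over TCP and drives the namespace at queue depth 1024 for 10 s
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests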
00:11:30.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.481 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:30.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.482 --rc genhtml_branch_coverage=1 00:11:30.482 --rc genhtml_function_coverage=1 00:11:30.482 --rc genhtml_legend=1 00:11:30.482 --rc geninfo_all_blocks=1 00:11:30.482 --rc geninfo_unexecuted_blocks=1 00:11:30.482 00:11:30.482 ' 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:30.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.482 --rc genhtml_branch_coverage=1 00:11:30.482 --rc genhtml_function_coverage=1 00:11:30.482 --rc genhtml_legend=1 00:11:30.482 --rc geninfo_all_blocks=1 00:11:30.482 --rc geninfo_unexecuted_blocks=1 00:11:30.482 00:11:30.482 ' 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:30.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.482 --rc genhtml_branch_coverage=1 00:11:30.482 --rc genhtml_function_coverage=1 00:11:30.482 --rc genhtml_legend=1 00:11:30.482 --rc geninfo_all_blocks=1 00:11:30.482 --rc geninfo_unexecuted_blocks=1 00:11:30.482 00:11:30.482 ' 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:30.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.482 --rc genhtml_branch_coverage=1 00:11:30.482 --rc genhtml_function_coverage=1 00:11:30.482 --rc genhtml_legend=1 00:11:30.482 --rc geninfo_all_blocks=1 00:11:30.482 --rc geninfo_unexecuted_blocks=1 00:11:30.482 00:11:30.482 ' 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.482 
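One detail in the common.sh header above that matters later in this test: the host identity is generated once with nvme gen-hostnqn and the same NQN/ID pair is passed on every connect, so both paths to cnode1 enumerate under a single NVMe subsystem on the host. A hedged sketch of the pattern (the UUID is the one from this run; the substring extraction mirrors the intent of common.sh, not its exact code):

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # f8eaa80b-beb5-4887-8952-726ced1ba196
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G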
01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.482 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:30.483 01:24:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:30.483 Cannot find device "nvmf_init_br" 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:30.483 Cannot find device "nvmf_init_br2" 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:30.483 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:30.742 Cannot find device "nvmf_tgt_br" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:30.742 Cannot find device "nvmf_tgt_br2" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:30.742 Cannot find device "nvmf_init_br" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:30.742 Cannot find device "nvmf_init_br2" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:30.742 Cannot find device "nvmf_tgt_br" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:30.742 Cannot find device "nvmf_tgt_br2" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:30.742 Cannot find device "nvmf_br" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:30.742 Cannot find device "nvmf_init_if" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:30.742 Cannot find device "nvmf_init_if2" 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:30.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:30.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
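The run of "Cannot find device" and "Cannot open network namespace" messages above is expected: nvmftestinit first tears down whatever topology a previous run may have left behind, and every cleanup step tolerates the object not existing before the namespace and veth pairs are rebuilt. The effect is equivalent to the following pattern (a sketch of the idea, not the literal common.sh code):

  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true    # detach from the bridge if attached
      ip link set "$dev" down     2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if   2>/dev/null || true
  ip link delete nvmf_init_if2  2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # also removes the interfaces moved into it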
00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:30.742 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:31.001 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:31.001 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:31.001 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:31.001 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:31.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:31.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:11:31.002 00:11:31.002 --- 10.0.0.3 ping statistics --- 00:11:31.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.002 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:31.002 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:31.002 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:11:31.002 00:11:31.002 --- 10.0.0.4 ping statistics --- 00:11:31.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.002 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:31.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:31.002 00:11:31.002 --- 10.0.0.1 ping statistics --- 00:11:31.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.002 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:31.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:11:31.002 00:11:31.002 --- 10.0.0.2 ping statistics --- 00:11:31.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.002 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=67439 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 67439 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 67439 ']' 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:31.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.002 01:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:31.002 [2024-09-28 01:24:26.911556] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:31.002 [2024-09-28 01:24:26.911722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.261 [2024-09-28 01:24:27.088886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.520 [2024-09-28 01:24:27.309270] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.520 [2024-09-28 01:24:27.309337] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.520 [2024-09-28 01:24:27.309353] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.520 [2024-09-28 01:24:27.309363] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.520 [2024-09-28 01:24:27.309374] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.520 [2024-09-28 01:24:27.309530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.520 [2024-09-28 01:24:27.310396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.520 [2024-09-28 01:24:27.310584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.520 [2024-09-28 01:24:27.310588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.780 [2024-09-28 01:24:27.477440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:32.038 01:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.038 01:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:11:32.038 01:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:32.038 01:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.038 01:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:32.038 01:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.038 01:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:32.297 [2024-09-28 01:24:28.153066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.297 01:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:32.565 Malloc0 00:11:32.565 01:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:33.148 01:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:33.148 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:33.408 [2024-09-28 01:24:29.251136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:33.408 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:33.667 [2024-09-28 01:24:29.483368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:33.667 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:33.926 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:11:33.926 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.926 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:11:33.927 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.927 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:33.927 01:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:35.832 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:36.092 01:24:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67530 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:36.092 01:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:36.092 [global] 00:11:36.092 thread=1 00:11:36.092 invalidate=1 00:11:36.092 rw=randrw 00:11:36.092 time_based=1 00:11:36.092 runtime=6 00:11:36.092 ioengine=libaio 00:11:36.092 direct=1 00:11:36.092 bs=4096 00:11:36.092 iodepth=128 00:11:36.092 norandommap=0 00:11:36.092 numjobs=1 00:11:36.092 00:11:36.092 verify_dump=1 00:11:36.092 verify_backlog=512 00:11:36.092 verify_state_save=0 00:11:36.092 do_verify=1 00:11:36.092 verify=crc32c-intel 00:11:36.092 [job0] 00:11:36.092 filename=/dev/nvme0n1 00:11:36.092 Could not set queue depth (nvme0n1) 00:11:36.092 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.092 fio-3.35 00:11:36.092 Starting 1 thread 00:11:37.029 01:24:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:37.289 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:37.548 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:38.117 01:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:38.376 01:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67530 00:11:42.569 00:11:42.570 job0: (groupid=0, jobs=1): err= 0: pid=67551: Sat Sep 28 01:24:38 2024 00:11:42.570 read: IOPS=8514, BW=33.3MiB/s (34.9MB/s)(200MiB/6008msec) 00:11:42.570 slat (usec): min=4, max=28090, avg=71.91, stdev=310.56 00:11:42.570 clat (usec): min=1968, max=36161, avg=10343.83, stdev=2006.48 00:11:42.570 lat (usec): min=1989, max=36172, avg=10415.74, stdev=2011.84 00:11:42.570 clat percentiles (usec): 00:11:42.570 | 1.00th=[ 5145], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9372], 00:11:42.570 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:11:42.570 | 70.00th=[10683], 80.00th=[11076], 90.00th=[12125], 95.00th=[14222], 00:11:42.570 | 99.00th=[16057], 99.50th=[16450], 99.90th=[32637], 99.95th=[34866], 00:11:42.570 | 99.99th=[35390] 00:11:42.570 bw ( KiB/s): min= 7352, max=20912, per=50.43%, avg=17178.25, stdev=4381.21, samples=12 00:11:42.570 iops : min= 1838, max= 5228, avg=4294.50, stdev=1095.41, samples=12 00:11:42.570 write: IOPS=4892, BW=19.1MiB/s (20.0MB/s)(101MiB/5284msec); 0 zone resets 00:11:42.570 slat (usec): min=14, max=3421, avg=78.25, stdev=206.49 00:11:42.570 clat (usec): min=1717, max=35299, avg=9012.68, stdev=2034.63 00:11:42.570 lat (usec): min=1752, max=35326, avg=9090.93, stdev=2039.93 00:11:42.570 clat percentiles (usec): 00:11:42.570 | 1.00th=[ 3752], 5.00th=[ 5145], 10.00th=[ 6587], 20.00th=[ 8291], 00:11:42.570 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:11:42.570 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10945], 00:11:42.570 | 99.00th=[14353], 99.50th=[15139], 99.90th=[33162], 99.95th=[34341], 00:11:42.570 | 99.99th=[35390] 00:11:42.570 bw ( KiB/s): min= 7784, max=20560, per=87.84%, avg=17188.83, stdev=4136.37, samples=12 00:11:42.570 iops : min= 1946, max= 5140, avg=4297.17, stdev=1034.17, samples=12 00:11:42.570 lat (msec) : 2=0.01%, 4=0.60%, 10=54.17%, 20=45.05%, 50=0.16% 00:11:42.570 cpu : usr=4.74%, sys=18.46%, ctx=4425, majf=0, minf=90 00:11:42.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:42.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.570 issued rwts: total=51158,25850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:42.570 00:11:42.570 Run status group 0 (all jobs): 00:11:42.570 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=200MiB (210MB), run=6008-6008msec 00:11:42.570 WRITE: bw=19.1MiB/s (20.0MB/s), 19.1MiB/s-19.1MiB/s (20.0MB/s-20.0MB/s), io=101MiB (106MB), run=5284-5284msec 00:11:42.570 00:11:42.570 Disk stats (read/write): 00:11:42.570 nvme0n1: ios=50387/25325, merge=0/0, ticks=502340/215150, in_queue=717490, util=98.53% 00:11:42.570 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:42.570 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67628 00:11:42.828 01:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:43.087 [global] 00:11:43.087 thread=1 00:11:43.087 invalidate=1 00:11:43.087 rw=randrw 00:11:43.087 time_based=1 00:11:43.087 runtime=6 00:11:43.087 ioengine=libaio 00:11:43.087 direct=1 00:11:43.087 bs=4096 00:11:43.087 iodepth=128 00:11:43.087 norandommap=0 00:11:43.087 numjobs=1 00:11:43.087 00:11:43.087 verify_dump=1 00:11:43.087 verify_backlog=512 00:11:43.087 verify_state_save=0 00:11:43.087 do_verify=1 00:11:43.087 verify=crc32c-intel 00:11:43.087 [job0] 00:11:43.087 filename=/dev/nvme0n1 00:11:43.087 Could not set queue depth (nvme0n1) 00:11:43.087 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.087 fio-3.35 00:11:43.087 Starting 1 thread 00:11:44.053 01:24:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:44.311 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:44.569 
01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:44.569 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:44.827 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:45.087 01:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67628 00:11:49.283 00:11:49.283 job0: (groupid=0, jobs=1): err= 0: pid=67655: Sat Sep 28 01:24:45 2024 00:11:49.283 read: IOPS=9934, BW=38.8MiB/s (40.7MB/s)(233MiB/6009msec) 00:11:49.283 slat (usec): min=7, max=7087, avg=50.38, stdev=222.36 00:11:49.283 clat (usec): min=459, max=18007, avg=8866.16, stdev=2123.70 00:11:49.283 lat (usec): min=479, max=18018, avg=8916.55, stdev=2142.26 00:11:49.283 clat percentiles (usec): 00:11:49.283 | 1.00th=[ 4080], 5.00th=[ 5342], 10.00th=[ 5997], 20.00th=[ 6849], 00:11:49.283 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9503], 00:11:49.283 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11994], 00:11:49.283 | 99.00th=[14877], 99.50th=[15401], 99.90th=[16057], 99.95th=[16319], 00:11:49.283 | 99.99th=[16712] 00:11:49.283 bw ( KiB/s): min= 1216, max=30912, per=51.96%, avg=20646.00, stdev=7489.43, samples=12 00:11:49.283 iops : min= 304, max= 7728, avg=5161.50, stdev=1872.36, samples=12 00:11:49.283 write: IOPS=5960, BW=23.3MiB/s (24.4MB/s)(121MiB/5209msec); 0 zone resets 00:11:49.283 slat (usec): min=14, max=3349, avg=61.59, stdev=169.95 00:11:49.283 clat (usec): min=714, max=16762, avg=7529.74, stdev=2112.11 00:11:49.283 lat (usec): min=770, max=16786, avg=7591.33, stdev=2131.79 00:11:49.283 clat percentiles (usec): 00:11:49.283 | 1.00th=[ 3064], 5.00th=[ 3884], 10.00th=[ 4424], 20.00th=[ 5211], 00:11:49.283 | 30.00th=[ 6128], 40.00th=[ 7701], 50.00th=[ 8225], 60.00th=[ 8586], 00:11:49.283 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10159], 00:11:49.283 | 99.00th=[11731], 99.50th=[13173], 99.90th=[14877], 99.95th=[15401], 00:11:49.283 | 99.99th=[16581] 00:11:49.283 bw ( KiB/s): min= 1192, max=31840, per=86.64%, avg=20656.00, stdev=7570.94, samples=12 00:11:49.283 iops : min= 298, max= 7960, avg=5164.00, stdev=1892.73, samples=12 00:11:49.283 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:49.283 lat (msec) : 2=0.11%, 4=2.49%, 10=78.04%, 20=19.34% 00:11:49.283 cpu : usr=5.14%, sys=20.19%, ctx=5107, majf=0, minf=139 00:11:49.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:49.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.283 issued rwts: total=59694,31047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.283 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:11:49.283 00:11:49.283 Run status group 0 (all jobs): 00:11:49.283 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=233MiB (245MB), run=6009-6009msec 00:11:49.283 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=121MiB (127MB), run=5209-5209msec 00:11:49.283 00:11:49.283 Disk stats (read/write): 00:11:49.283 nvme0n1: ios=59106/30296, merge=0/0, ticks=502414/214340, in_queue=716754, util=98.66% 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:49.283 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.543 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:49.543 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:49.543 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:49.543 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:49.543 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:49.543 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.803 rmmod nvme_tcp 00:11:49.803 rmmod nvme_fabrics 00:11:49.803 rmmod nvme_keyring 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 
67439 ']' 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 67439 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 67439 ']' 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 67439 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67439 00:11:49.803 killing process with pid 67439 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67439' 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 67439 00:11:49.803 01:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 67439 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:51.182 01:24:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.182 01:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.182 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:51.182 00:11:51.182 real 0m20.866s 00:11:51.182 user 1m15.657s 00:11:51.182 sys 0m9.540s 00:11:51.182 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.182 ************************************ 00:11:51.182 END TEST nvmf_target_multipath 00:11:51.182 ************************************ 00:11:51.182 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:51.182 01:24:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:51.182 01:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:51.182 01:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.182 01:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:51.182 ************************************ 00:11:51.182 START TEST nvmf_zcopy 00:11:51.182 ************************************ 00:11:51.182 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:51.442 * Looking for test storage... 
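The multipath test that just finished follows a simple pattern: both TCP listeners (10.0.0.3 and 10.0.0.4, port 4420) are attached to the same subsystem, the host connects to each of them so the kernel assembles one multipath namespace with two controller paths (nvme0c0n1 and nvme0c1n1), and while fio runs against /dev/nvme0n1 the script flips each listener's ANA state over RPC and polls sysfs until the kernel agrees. A condensed sketch of that flip-and-wait step, assuming the helper keeps retrying up to its 20-second timeout (the trace above only shows the case where the state already matches):

    check_ana_state() {        # sketch of the helper traced above
        local path=$1 ana_state=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # wait until the sysfs node exists and reports the expected ANA state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- > 0 )) || return 1
            sleep 1
        done
    }

    # flip one listener, then wait for the kernel's view of both paths to update
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    check_ana_state nvme0c0n1 inaccessible
    check_ana_state nvme0c1n1 non-optimized

The fio summaries above confirm the point of the exercise: even with one path made inaccessible mid-run, the job completes with err=0 and the disk stats show all I/O funneled through the single multipath node nvme0n1.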
00:11:51.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:51.442 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:51.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.443 --rc genhtml_branch_coverage=1 00:11:51.443 --rc genhtml_function_coverage=1 00:11:51.443 --rc genhtml_legend=1 00:11:51.443 --rc geninfo_all_blocks=1 00:11:51.443 --rc geninfo_unexecuted_blocks=1 00:11:51.443 00:11:51.443 ' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:51.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.443 --rc genhtml_branch_coverage=1 00:11:51.443 --rc genhtml_function_coverage=1 00:11:51.443 --rc genhtml_legend=1 00:11:51.443 --rc geninfo_all_blocks=1 00:11:51.443 --rc geninfo_unexecuted_blocks=1 00:11:51.443 00:11:51.443 ' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:51.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.443 --rc genhtml_branch_coverage=1 00:11:51.443 --rc genhtml_function_coverage=1 00:11:51.443 --rc genhtml_legend=1 00:11:51.443 --rc geninfo_all_blocks=1 00:11:51.443 --rc geninfo_unexecuted_blocks=1 00:11:51.443 00:11:51.443 ' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:51.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.443 --rc genhtml_branch_coverage=1 00:11:51.443 --rc genhtml_function_coverage=1 00:11:51.443 --rc genhtml_legend=1 00:11:51.443 --rc geninfo_all_blocks=1 00:11:51.443 --rc geninfo_unexecuted_blocks=1 00:11:51.443 00:11:51.443 ' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
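The lt/cmp_versions calls above implement the usual shell trick for comparing dotted version strings: split on dots and dashes, then compare field by field as integers, so here 1.15 sorts below 2 on the first field and the legacy --rc lcov_branch_coverage options are selected. A simplified stand-in for that helper pair, assuming purely numeric fields:

    version_lt() {                       # 0 (true) when $1 sorts before $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                          # equal is not "less than"
    }

    version_lt 1.15 2 && echo 'use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'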
00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.443 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
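After nvmftestinit registers its cleanup trap, nvmf_veth_init (traced below) builds the self-contained test network: two initiator interfaces stay in the root namespace (10.0.0.1 and 10.0.0.2), two target interfaces are moved into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), and all four veth bridge-side peers are enslaved to the nvmf_br bridge, with iptables ACCEPT rules opening TCP port 4420. Reduced to a single initiator/target pair, the sequence is roughly:

    # one initiator/target pair from the nvmf_veth_init trace below
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings at the end of the sequence (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside the namespace) are the sanity check that the bridge is forwarding before the target application is started.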
00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:51.443 Cannot find device "nvmf_init_br" 00:11:51.443 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:51.444 01:24:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:51.444 Cannot find device "nvmf_init_br2" 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:51.444 Cannot find device "nvmf_tgt_br" 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:51.444 Cannot find device "nvmf_tgt_br2" 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:51.444 Cannot find device "nvmf_init_br" 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:51.444 Cannot find device "nvmf_init_br2" 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:51.444 Cannot find device "nvmf_tgt_br" 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:51.444 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:51.444 Cannot find device "nvmf_tgt_br2" 00:11:51.703 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:51.703 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:51.703 Cannot find device "nvmf_br" 00:11:51.703 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:51.703 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:51.703 Cannot find device "nvmf_init_if" 00:11:51.703 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:51.703 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:51.703 Cannot find device "nvmf_init_if2" 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:51.704 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:51.704 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:51.704 01:24:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:51.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:51.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:11:51.704 00:11:51.704 --- 10.0.0.3 ping statistics --- 00:11:51.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.704 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:51.704 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:51.704 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:11:51.704 00:11:51.704 --- 10.0.0.4 ping statistics --- 00:11:51.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.704 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:51.704 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:51.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:11:51.704 00:11:51.704 --- 10.0.0.1 ping statistics --- 00:11:51.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.704 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:51.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:51.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:51.963 00:11:51.963 --- 10.0.0.2 ping statistics --- 00:11:51.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.963 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.963 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=67966 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 67966 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 67966 ']' 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.964 01:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.964 [2024-09-28 01:24:47.795586] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:51.964 [2024-09-28 01:24:47.795755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.223 [2024-09-28 01:24:47.968780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.483 [2024-09-28 01:24:48.200261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.483 [2024-09-28 01:24:48.200340] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.483 [2024-09-28 01:24:48.200375] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.483 [2024-09-28 01:24:48.200390] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.483 [2024-09-28 01:24:48.200402] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.483 [2024-09-28 01:24:48.200439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.483 [2024-09-28 01:24:48.347960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.052 [2024-09-28 01:24:48.840179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.052 [2024-09-28 01:24:48.856256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.052 malloc0 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:53.052 { 00:11:53.052 "params": { 00:11:53.052 "name": "Nvme$subsystem", 00:11:53.052 "trtype": "$TEST_TRANSPORT", 00:11:53.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:53.052 "adrfam": "ipv4", 00:11:53.052 "trsvcid": "$NVMF_PORT", 00:11:53.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:53.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:53.052 "hdgst": ${hdgst:-false}, 00:11:53.052 "ddgst": ${ddgst:-false} 00:11:53.052 }, 00:11:53.052 "method": "bdev_nvme_attach_controller" 00:11:53.052 } 00:11:53.052 EOF 00:11:53.052 )") 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
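Note: the target-side setup that the xtrace above performs one rpc_cmd at a time boils down to the sequence below. This is a consolidated sketch for readability only; it assumes rpc_cmd forwards to scripts/rpc.py over the default /var/tmp/spdk.sock socket inside the nvmf_tgt_ns_spdk namespace, exactly as the trace shows.

    # create the TCP transport with zero-copy enabled and in-capsule data disabled
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # create the subsystem (serial SPDK00000000000001, up to 10 namespaces)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # expose the subsystem and the discovery service on 10.0.0.3:4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # back namespace 1 with a 32 MB malloc bdev using a 4096-byte block size
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1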
00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:11:53.052 01:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:53.052 "params": { 00:11:53.052 "name": "Nvme1", 00:11:53.052 "trtype": "tcp", 00:11:53.052 "traddr": "10.0.0.3", 00:11:53.052 "adrfam": "ipv4", 00:11:53.052 "trsvcid": "4420", 00:11:53.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:53.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:53.052 "hdgst": false, 00:11:53.052 "ddgst": false 00:11:53.052 }, 00:11:53.052 "method": "bdev_nvme_attach_controller" 00:11:53.052 }' 00:11:53.312 [2024-09-28 01:24:49.041132] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:53.312 [2024-09-28 01:24:49.041299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67999 ] 00:11:53.312 [2024-09-28 01:24:49.219678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.572 [2024-09-28 01:24:49.442510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.831 [2024-09-28 01:24:49.607267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:54.090 Running I/O for 10 seconds... 00:12:03.930 5270.00 IOPS, 41.17 MiB/s 5297.50 IOPS, 41.39 MiB/s 5284.67 IOPS, 41.29 MiB/s 5303.25 IOPS, 41.43 MiB/s 5323.00 IOPS, 41.59 MiB/s 5328.33 IOPS, 41.63 MiB/s 5342.71 IOPS, 41.74 MiB/s 5302.75 IOPS, 41.43 MiB/s 5283.33 IOPS, 41.28 MiB/s 5270.80 IOPS, 41.18 MiB/s 00:12:03.930 Latency(us) 00:12:03.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.930 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:03.930 Verification LBA range: start 0x0 length 0x1000 00:12:03.930 Nvme1n1 : 10.01 5272.05 41.19 0.00 0.00 24211.41 595.78 31695.59 00:12:03.930 =================================================================================================================== 00:12:03.930 Total : 5272.05 41.19 0.00 0.00 24211.41 595.78 31695.59 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68134 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:05.309 { 00:12:05.309 "params": { 00:12:05.309 "name": "Nvme$subsystem", 00:12:05.309 "trtype": "$TEST_TRANSPORT", 00:12:05.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.309 "adrfam": "ipv4", 00:12:05.309 "trsvcid": "$NVMF_PORT", 00:12:05.309 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.309 "hdgst": ${hdgst:-false}, 00:12:05.309 "ddgst": ${ddgst:-false} 00:12:05.309 }, 00:12:05.309 "method": "bdev_nvme_attach_controller" 00:12:05.309 } 00:12:05.309 EOF 00:12:05.309 )") 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:12:05.309 [2024-09-28 01:25:00.866882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.866938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:12:05.309 01:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:05.309 "params": { 00:12:05.309 "name": "Nvme1", 00:12:05.309 "trtype": "tcp", 00:12:05.309 "traddr": "10.0.0.3", 00:12:05.309 "adrfam": "ipv4", 00:12:05.309 "trsvcid": "4420", 00:12:05.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.309 "hdgst": false, 00:12:05.309 "ddgst": false 00:12:05.309 }, 00:12:05.309 "method": "bdev_nvme_attach_controller" 00:12:05.309 }' 00:12:05.309 [2024-09-28 01:25:00.878804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.878926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.890755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.890812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.902794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.902917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.914834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.914896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.926812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.926905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.938851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.938905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.950827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.950912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.962798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.962868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.971768] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:05.309 [2024-09-28 01:25:00.971916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68134 ] 00:12:05.309 [2024-09-28 01:25:00.974874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.974931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.986825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.986880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:00.998839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:00.998897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.010845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.010900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.022867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.022928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.034910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.034957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.046875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.046952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.058861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.058916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.070870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.070927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.082854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.082907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.094886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.094962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.106888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.106943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.118889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.118947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.130902] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.130957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.142915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.142974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.145659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.309 [2024-09-28 01:25:01.154973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.155060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.166976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.167057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.178958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.179047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.191027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.191098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.202979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.203040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.214952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.215031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.227047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.227092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.309 [2024-09-28 01:25:01.239021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.309 [2024-09-28 01:25:01.239078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.251060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.251125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.263142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.263213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.275152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.275216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.287178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.287251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.299132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.299195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.311105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.311179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.323148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.323207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.335143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.335223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.336255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.569 [2024-09-28 01:25:01.343117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.343177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.355165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.355231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.367059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.367101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.379091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.379135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.391086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.391135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.403095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.403139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.415171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.415230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.427150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.427223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.439094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.439135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.451115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.451160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.463096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:05.569 [2024-09-28 01:25:01.463136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.475115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.475177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.487144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.487186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.569 [2024-09-28 01:25:01.499116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.569 [2024-09-28 01:25:01.499160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.828 [2024-09-28 01:25:01.511193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.828 [2024-09-28 01:25:01.511237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.828 [2024-09-28 01:25:01.523124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.828 [2024-09-28 01:25:01.523168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.526429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:05.829 [2024-09-28 01:25:01.535173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.535229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.547184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.547235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.559135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.559174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.571167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.571212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.583161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.583202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.595140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.595183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.607161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.607200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.619161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.619205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.631168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 
[2024-09-28 01:25:01.631208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.643235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.643312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.655244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.655322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.667246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.667304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.679255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.679330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.691351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.691433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.703416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.703484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 Running I/O for 5 seconds... 00:12:05.829 [2024-09-28 01:25:01.715429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.715492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.732355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.732414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.829 [2024-09-28 01:25:01.746493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.829 [2024-09-28 01:25:01.746548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.763209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.763258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.778677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.778723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.795845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.795893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.813437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.813522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.829722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.829765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 
[2024-09-28 01:25:01.840675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.840721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.856656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.856713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.872835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.872898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.889920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.889979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.908017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.908080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.924114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.924187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.088 [2024-09-28 01:25:01.935433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.088 [2024-09-28 01:25:01.935500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.089 [2024-09-28 01:25:01.951601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.089 [2024-09-28 01:25:01.951644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.089 [2024-09-28 01:25:01.967450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.089 [2024-09-28 01:25:01.967541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.089 [2024-09-28 01:25:01.984490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.089 [2024-09-28 01:25:01.984540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.089 [2024-09-28 01:25:01.999706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.089 [2024-09-28 01:25:01.999753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.089 [2024-09-28 01:25:02.016230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.089 [2024-09-28 01:25:02.016305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.032900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.032976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.044234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.044289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.060588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 
01:25:02.060635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.076189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.076245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.092677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.092739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.104418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.104485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.120435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.120520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.135825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.135879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.147889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.147964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.165125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.165214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.182438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.182528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.198944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.199029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.215868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.215961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.232990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.233047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.248007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.248068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.263577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.263617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.380 [2024-09-28 01:25:02.279361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.380 [2024-09-28 01:25:02.279423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.639 [2024-09-28 01:25:02.295696] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.639 [2024-09-28 01:25:02.295776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.639 [2024-09-28 01:25:02.311943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.639 [2024-09-28 01:25:02.311994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.639 [2024-09-28 01:25:02.329412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.639 [2024-09-28 01:25:02.329480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.639 [2024-09-28 01:25:02.345787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.639 [2024-09-28 01:25:02.345862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.639 [2024-09-28 01:25:02.363693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.363750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.379985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.380065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.396153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.396217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.408884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.408962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.425050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.425090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.439735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.439795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.456800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.456856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.471895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.471956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.483950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.484005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.502743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.502818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.518710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.518751] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.530691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.530740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.548533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.548620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.640 [2024-09-28 01:25:02.564701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.640 [2024-09-28 01:25:02.564746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.899 [2024-09-28 01:25:02.578443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.899 [2024-09-28 01:25:02.578514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.899 [2024-09-28 01:25:02.594669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.899 [2024-09-28 01:25:02.594736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.899 [2024-09-28 01:25:02.609709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.899 [2024-09-28 01:25:02.609751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.899 [2024-09-28 01:25:02.625827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.899 [2024-09-28 01:25:02.625907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.899 [2024-09-28 01:25:02.642562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.899 [2024-09-28 01:25:02.642604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.899 [2024-09-28 01:25:02.657711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.899 [2024-09-28 01:25:02.657772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.673884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.673940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.690406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.690518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.708241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.708297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 9349.00 IOPS, 73.04 MiB/s [2024-09-28 01:25:02.723863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.723961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.734701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.734743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.750189] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.750265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.765931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.765987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.782598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.782665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.798743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.798801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.815779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.815877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.900 [2024-09-28 01:25:02.830694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.900 [2024-09-28 01:25:02.830737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.847427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.847509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.864846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.864903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.881025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.881137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.896804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.896862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.913240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.913300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.930606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.930649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.947482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.947622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.964060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.964117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.980385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.980474] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:02.997512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:02.997552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:03.013552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:03.013627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:03.030908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:03.030965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:03.046687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:03.046733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:03.057375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:03.057431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:03.071322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:03.071430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.160 [2024-09-28 01:25:03.086615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.160 [2024-09-28 01:25:03.086672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.101868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.101925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.118118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.118210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.134747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.134796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.151458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.151525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.162101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.162158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.178903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.178959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.194026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.194082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.205445] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.205519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.221402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.221497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.236722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.236765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.252941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.252997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.270231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.270288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.286065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.286122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.301469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.301522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.317199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.419 [2024-09-28 01:25:03.317257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.419 [2024-09-28 01:25:03.327786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.420 [2024-09-28 01:25:03.327855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.420 [2024-09-28 01:25:03.344093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.420 [2024-09-28 01:25:03.344150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.359938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.359995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.370836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.370892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.387199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.387261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.402254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.402314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.415461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.415506] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.435417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.435509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.452831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.452934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.468093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.468150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.679 [2024-09-28 01:25:03.483665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.679 [2024-09-28 01:25:03.483707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.680 [2024-09-28 01:25:03.499855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.680 [2024-09-28 01:25:03.499913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.680 [2024-09-28 01:25:03.511440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.680 [2024-09-28 01:25:03.511513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.680 [2024-09-28 01:25:03.527676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.680 [2024-09-28 01:25:03.527732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.680 [2024-09-28 01:25:03.543495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.680 [2024-09-28 01:25:03.543549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.680 [2024-09-28 01:25:03.560963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.680 [2024-09-28 01:25:03.561039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.680 [2024-09-28 01:25:03.577110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.680 [2024-09-28 01:25:03.577167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.680 [2024-09-28 01:25:03.595488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.680 [2024-09-28 01:25:03.595555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.680 [2024-09-28 01:25:03.607744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.680 [2024-09-28 01:25:03.607799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.626311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.626367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.642703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.642755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.658027] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.658086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.674713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.674771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.689786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.689860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.705142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.705201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.716022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.716079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 9608.00 IOPS, 75.06 MiB/s [2024-09-28 01:25:03.732346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.732410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.747270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.747344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.763716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.763758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.779893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.779962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.791705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.791745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.806937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.807017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.822318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.822375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.833288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.833345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.849488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 01:25:03.849539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.940 [2024-09-28 01:25:03.865109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.940 [2024-09-28 
01:25:03.865167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:03.881986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:03.882046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:03.898742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:03.898801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:03.916237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:03.916297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:03.932526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:03.932585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:03.949961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:03.950021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:03.965686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:03.965730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:03.981431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:03.981516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:03.993292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:03.993351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:04.009292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:04.009348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:04.023818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:04.023888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:04.039677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:04.039720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:04.050091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:04.050147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:04.066570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:04.066656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:04.081430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.200 [2024-09-28 01:25:04.081497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.200 [2024-09-28 01:25:04.097249] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.201 [2024-09-28 01:25:04.097306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.201 [2024-09-28 01:25:04.115283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.201 [2024-09-28 01:25:04.115354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.132722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.132801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.149140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.149197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.160196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.160253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.176618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.176662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.192565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.192635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.208858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.208933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.226200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.226269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.242501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.242542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.258521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.258578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.276389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.276472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.291195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.291241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.307714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.307781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.325632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.325691] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.340749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.340807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.356437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.356523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.368038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.368115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.460 [2024-09-28 01:25:04.382694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.460 [2024-09-28 01:25:04.382736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.398168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.398226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.417812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.417886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.433970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.434031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.447347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.447426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.465406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.465504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.481425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.481554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.492526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.492566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.508242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.508299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.524442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.524508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.541171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.541261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.557749] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.557824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.574030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.574088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.589727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.589770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.606241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.606297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.623101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.623147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.720 [2024-09-28 01:25:04.640586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.720 [2024-09-28 01:25:04.640691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.980 [2024-09-28 01:25:04.657599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.980 [2024-09-28 01:25:04.657643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.980 [2024-09-28 01:25:04.670224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.980 [2024-09-28 01:25:04.670264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.688553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.688623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.704353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.704409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.715865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.715920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 9671.33 IOPS, 75.56 MiB/s [2024-09-28 01:25:04.731562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.731600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.744240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.744295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.761905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.761960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.777663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 
01:25:04.777731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.793869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.793939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.805735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.805775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.822264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.822319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.838764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.838809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.854237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.854294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.869372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.869429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.886668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.886726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.981 [2024-09-28 01:25:04.898882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.981 [2024-09-28 01:25:04.898939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:04.917514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:04.917568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:04.933049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:04.933107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:04.948776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:04.948864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:04.960314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:04.960387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:04.976221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:04.976278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:04.991863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:04.991920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.008874] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.008932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.025173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.025230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.036577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.036621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.052698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.052740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.068367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.068424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.084387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.084454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.101694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.101737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.117858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.117916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.135042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.135088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.150476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.150530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.240 [2024-09-28 01:25:05.162393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.240 [2024-09-28 01:25:05.162460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.176376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.176450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.189676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.189718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.207411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.207487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.222499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.222561] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.238331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.238386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.255520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.255583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.272972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.273047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.287957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.288017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.303917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.303974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.320086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.320146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.331656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.331698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.347658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.347700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.363856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.363918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.379671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.379714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.391594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.391635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.408091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.408163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.500 [2024-09-28 01:25:05.424453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.500 [2024-09-28 01:25:05.424519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.440602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.440646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.457605] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.457651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.474902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.474952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.490505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.490651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.506761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.506883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.518326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.518470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.532825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.532987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.546451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.546607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.565070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.565201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.580863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.580997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.596330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.596446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.612434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.612559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.623814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.623953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.639955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.640108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.657214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.657272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.673212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.673317] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.760 [2024-09-28 01:25:05.689595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.760 [2024-09-28 01:25:05.689687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.706833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.706881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 9623.75 IOPS, 75.19 MiB/s [2024-09-28 01:25:05.724178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.724235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.741481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.741549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.757997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.758055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.774868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.774927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.792418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.792485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.808273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.808329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.826194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.826251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.841965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.842022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.853959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.854017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.871131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.871180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.887652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.887694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.903022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.903084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.919517] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.919604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.932502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.932556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.021 [2024-09-28 01:25:05.951476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.021 [2024-09-28 01:25:05.951528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:05.967627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:05.967671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:05.984919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:05.984981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.000335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.000391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.016254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.016296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.033089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.033132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.050322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.050363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.066334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.066375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.083296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.083564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.099245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.099306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.116372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.116414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.132342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.132384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.148928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.148970] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.160443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.160528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.177569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.177611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.193559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.193782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.281 [2024-09-28 01:25:06.209170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.281 [2024-09-28 01:25:06.209394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.540 [2024-09-28 01:25:06.224506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.540 [2024-09-28 01:25:06.224560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.540 [2024-09-28 01:25:06.241667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.540 [2024-09-28 01:25:06.241708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.540 [2024-09-28 01:25:06.257444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.540 [2024-09-28 01:25:06.257545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.540 [2024-09-28 01:25:06.269248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.540 [2024-09-28 01:25:06.269289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.540 [2024-09-28 01:25:06.285876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.540 [2024-09-28 01:25:06.285917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.301257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.301297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.318705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.318746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.334847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.334902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.351424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.351490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.369066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.369290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.384142] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.384359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.400238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.400278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.417419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.417501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.433129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.433187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.449753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.449811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.541 [2024-09-28 01:25:06.466871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.541 [2024-09-28 01:25:06.467085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.480699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.480747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.500133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.500233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.514243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.514444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.531141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.531186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.548189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.548391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.564716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.564757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.581318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.581358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.598281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.598505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.800 [2024-09-28 01:25:06.614426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.800 [2024-09-28 01:25:06.614511] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:10.800 [2024-09-28 01:25:06.625236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:10.800 [2024-09-28 01:25:06.625436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:10.800 [2024-09-28 01:25:06.641040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:10.800 [2024-09-28 01:25:06.641084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:10.800 [2024-09-28 01:25:06.656509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:10.800 [2024-09-28 01:25:06.656561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:10.800 [2024-09-28 01:25:06.672724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:10.800 [2024-09-28 01:25:06.672764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:10.800 [2024-09-28 01:25:06.689918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:10.800 [2024-09-28 01:25:06.689960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:10.800 [2024-09-28 01:25:06.705686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:10.800 [2024-09-28 01:25:06.705745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:10.800 9613.60 IOPS, 75.11 MiB/s [2024-09-28 01:25:06.719091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:10.800 [2024-09-28 01:25:06.719258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:10.800 [2024-09-28 01:25:06.731593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:10.800 [2024-09-28 01:25:06.731767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:11.060
00:12:11.060 Latency(us)
00:12:11.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:11.060 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:11.060 Nvme1n1 : 5.02 9610.13 75.08 0.00 0.00 13295.68 5034.36 24427.05
00:12:11.060 ===================================================================================================================
00:12:11.060 Total : 9610.13 75.08 0.00 0.00 13295.68 5034.36 24427.05
00:12:11.060 [2024-09-28 01:25:06.743536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:11.060 [2024-09-28 01:25:06.743735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:11.060 [2024-09-28 01:25:06.755581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:11.060 [2024-09-28 01:25:06.755763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:11.060 [2024-09-28 01:25:06.767559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:11.060 [2024-09-28 01:25:06.767754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:11.060 [2024-09-28 01:25:06.779663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:11.060 [2024-09-28 01:25:06.779971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:11.060 [2024-09-28
01:25:06.791621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.791879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.803576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.803773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.815603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.815797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.827594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.827773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.839580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.839774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.851651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.851893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.863634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.863895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.875612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.875808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.887615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.887803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.899617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.899810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.911615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.911809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.923614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.923806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.935649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.935686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.947670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.947860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.959675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.959867] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.971718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.971931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.060 [2024-09-28 01:25:06.983687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.060 [2024-09-28 01:25:06.983726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:06.995689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:06.995911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.007768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.007848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.019719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.019757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.031718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.031756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.043738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.043776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.055807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.055879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.067835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.067883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.079759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.079814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.091746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.091783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.103758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.103795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.115767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.115821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.127749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.127785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.139770] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.139824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.151790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.151847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.163876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.163933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.175837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.175877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.187812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.187850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.199873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.199914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.211817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.211854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.223789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.223840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.235840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.235878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.320 [2024-09-28 01:25:07.247858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.320 [2024-09-28 01:25:07.247896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.259831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.260037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.271857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.271895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.283827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.283864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.295856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.295892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.307851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.307887] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.319848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.319884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.331874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.331911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.343846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.343882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.355870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.355907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.367895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.367932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.379881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.379917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.391881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.391917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.403926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.403974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.415875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.579 [2024-09-28 01:25:07.415911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.579 [2024-09-28 01:25:07.427896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.580 [2024-09-28 01:25:07.427931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.580 [2024-09-28 01:25:07.439880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.580 [2024-09-28 01:25:07.439915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.580 [2024-09-28 01:25:07.451901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.580 [2024-09-28 01:25:07.451937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.580 [2024-09-28 01:25:07.463902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.580 [2024-09-28 01:25:07.463938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.580 [2024-09-28 01:25:07.475966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.580 [2024-09-28 01:25:07.476029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.580 [2024-09-28 01:25:07.487923] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.580 [2024-09-28 01:25:07.487961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.580 [2024-09-28 01:25:07.499954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.580 [2024-09-28 01:25:07.499993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.511914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.511955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.523966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.524129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.535951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.535989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.547966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.548003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.559956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.559995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.571945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.571983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.583999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.584047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.595967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.596006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.607950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.607987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.620036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.620086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.631958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.631993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.643981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.644018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.655987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.656024] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.667968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.668004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.679984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.680021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.691984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.692019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.704005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.704042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.716057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.716113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.728006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.728047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.740033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.740088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.752034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.752073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.839 [2024-09-28 01:25:07.764057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.839 [2024-09-28 01:25:07.764274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.098 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68134) - No such process 00:12:12.098 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68134 00:12:12.098 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.098 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.098 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:12.098 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.098 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:12.098 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.098 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:12.098 delay0 00:12:12.099 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.099 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:12.099 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.099 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:12.099 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.099 01:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:12:12.099 [2024-09-28 01:25:07.999316] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:18.666 Initializing NVMe Controllers 00:12:18.666 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:18.666 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:18.666 Initialization complete. Launching workers. 00:12:18.666 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 817 00:12:18.666 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1104, failed to submit 33 00:12:18.666 success 999, unsuccessful 105, failed 0 00:12:18.666 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:18.666 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:18.666 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:18.666 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:18.666 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.667 rmmod nvme_tcp 00:12:18.667 rmmod nvme_fabrics 00:12:18.667 rmmod nvme_keyring 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 67966 ']' 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 67966 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 67966 ']' 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 67966 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67966 00:12:18.667 killing process with pid 67966 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:18.667 
01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67966' 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 67966 00:12:18.667 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 67966 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:19.605 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # 
return 0 00:12:19.865 00:12:19.865 real 0m28.625s 00:12:19.865 user 0m47.091s 00:12:19.865 sys 0m7.098s 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.865 ************************************ 00:12:19.865 END TEST nvmf_zcopy 00:12:19.865 ************************************ 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:19.865 ************************************ 00:12:19.865 START TEST nvmf_nmic 00:12:19.865 ************************************ 00:12:19.865 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:20.125 * Looking for test storage... 00:12:20.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:20.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.125 --rc genhtml_branch_coverage=1 00:12:20.125 --rc genhtml_function_coverage=1 00:12:20.125 --rc genhtml_legend=1 00:12:20.125 --rc geninfo_all_blocks=1 00:12:20.125 --rc geninfo_unexecuted_blocks=1 00:12:20.125 00:12:20.125 ' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:20.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.125 --rc genhtml_branch_coverage=1 00:12:20.125 --rc genhtml_function_coverage=1 00:12:20.125 --rc genhtml_legend=1 00:12:20.125 --rc geninfo_all_blocks=1 00:12:20.125 --rc geninfo_unexecuted_blocks=1 00:12:20.125 00:12:20.125 ' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:20.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.125 --rc genhtml_branch_coverage=1 00:12:20.125 --rc genhtml_function_coverage=1 00:12:20.125 --rc genhtml_legend=1 00:12:20.125 --rc geninfo_all_blocks=1 00:12:20.125 --rc geninfo_unexecuted_blocks=1 00:12:20.125 00:12:20.125 ' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:20.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.125 --rc genhtml_branch_coverage=1 00:12:20.125 --rc genhtml_function_coverage=1 00:12:20.125 --rc genhtml_legend=1 00:12:20.125 --rc geninfo_all_blocks=1 00:12:20.125 --rc geninfo_unexecuted_blocks=1 00:12:20.125 00:12:20.125 ' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.125 01:25:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.125 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:20.125 01:25:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:20.125 01:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:20.125 Cannot 
find device "nvmf_init_br" 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:20.125 Cannot find device "nvmf_init_br2" 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:20.125 Cannot find device "nvmf_tgt_br" 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:20.125 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:20.385 Cannot find device "nvmf_tgt_br2" 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:20.385 Cannot find device "nvmf_init_br" 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:20.385 Cannot find device "nvmf_init_br2" 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:20.385 Cannot find device "nvmf_tgt_br" 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:20.385 Cannot find device "nvmf_tgt_br2" 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:20.385 Cannot find device "nvmf_br" 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:20.385 Cannot find device "nvmf_init_if" 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:20.385 Cannot find device "nvmf_init_if2" 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:20.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:20.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:20.385 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:20.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:20.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:12:20.644 00:12:20.644 --- 10.0.0.3 ping statistics --- 00:12:20.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.644 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:20.644 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:20.644 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:12:20.644 00:12:20.644 --- 10.0.0.4 ping statistics --- 00:12:20.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.644 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:20.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:20.644 00:12:20.644 --- 10.0.0.1 ping statistics --- 00:12:20.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.644 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:20.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:20.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:12:20.644 00:12:20.644 --- 10.0.0.2 ping statistics --- 00:12:20.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.644 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.644 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=68534 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 68534 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 68534 ']' 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.645 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.904 [2024-09-28 01:25:16.593360] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
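Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, the nmic test below drives it entirely through rpc_cmd, the test helper that forwards to scripts/rpc.py. A condensed sketch of the equivalent rpc.py calls, reusing the subsystem names, bdev, and 10.0.0.3 listener that appear in the trace; this is a sketch of the flow under those assumptions, not the exact wrapper invocations:

    RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # negative test case1: a second subsystem tries to claim the same bdev,
    # which fails because Malloc0 is already claimed exclusively by cnode1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail

The "bdev Malloc0 already claimed" error and the -32602 JSON-RPC response further down are the expected outcome of that last call; the test then adds a second listener on port 4421 to cnode1 and connects to both paths with nvme connect.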
00:12:20.904 [2024-09-28 01:25:16.593887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.904 [2024-09-28 01:25:16.772737] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.163 [2024-09-28 01:25:17.010318] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.163 [2024-09-28 01:25:17.010636] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.163 [2024-09-28 01:25:17.010837] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.163 [2024-09-28 01:25:17.011097] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.163 [2024-09-28 01:25:17.011250] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.163 [2024-09-28 01:25:17.011605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.163 [2024-09-28 01:25:17.011873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.163 [2024-09-28 01:25:17.011869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.163 [2024-09-28 01:25:17.011753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.422 [2024-09-28 01:25:17.208450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:21.680 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.680 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:12:21.680 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:21.680 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.680 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.680 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.680 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.680 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.939 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 [2024-09-28 01:25:17.616673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 Malloc0 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:21.940 01:25:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 [2024-09-28 01:25:17.719379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:21.940 test case1: single bdev can't be used in multiple subsystems 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 [2024-09-28 01:25:17.743090] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:21.940 [2024-09-28 01:25:17.743152] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:21.940 [2024-09-28 01:25:17.743176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.940 request: 00:12:21.940 { 00:12:21.940 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:21.940 "namespace": { 00:12:21.940 "bdev_name": "Malloc0", 00:12:21.940 "no_auto_visible": false 00:12:21.940 }, 00:12:21.940 "method": "nvmf_subsystem_add_ns", 00:12:21.940 "req_id": 1 00:12:21.940 } 00:12:21.940 Got JSON-RPC error response 00:12:21.940 response: 00:12:21.940 { 00:12:21.940 "code": -32602, 00:12:21.940 "message": "Invalid parameters" 00:12:21.940 } 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:21.940 Adding namespace failed - expected result. 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:21.940 test case2: host connect to nvmf target in multiple paths 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.940 [2024-09-28 01:25:17.755338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.940 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:22.199 01:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:22.199 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.199 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:22.199 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.199 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:22.199 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:24.726 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:24.726 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:24.726 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.726 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:24.726 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.726 01:25:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:24.726 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:24.726 [global] 00:12:24.726 thread=1 00:12:24.726 invalidate=1 00:12:24.726 rw=write 00:12:24.726 time_based=1 00:12:24.726 runtime=1 00:12:24.726 ioengine=libaio 00:12:24.726 direct=1 00:12:24.726 bs=4096 00:12:24.726 iodepth=1 00:12:24.726 norandommap=0 00:12:24.726 numjobs=1 00:12:24.726 00:12:24.726 verify_dump=1 00:12:24.726 verify_backlog=512 00:12:24.726 verify_state_save=0 00:12:24.726 do_verify=1 00:12:24.726 verify=crc32c-intel 00:12:24.726 [job0] 00:12:24.726 filename=/dev/nvme0n1 00:12:24.726 Could not set queue depth (nvme0n1) 00:12:24.726 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.726 fio-3.35 00:12:24.726 Starting 1 thread 00:12:25.681 00:12:25.681 job0: (groupid=0, jobs=1): err= 0: pid=68626: Sat Sep 28 01:25:21 2024 00:12:25.681 read: IOPS=2268, BW=9075KiB/s (9293kB/s)(9084KiB/1001msec) 00:12:25.681 slat (nsec): min=12448, max=84155, avg=16475.17, stdev=5214.04 00:12:25.681 clat (usec): min=168, max=8080, avg=231.34, stdev=271.72 00:12:25.681 lat (usec): min=184, max=8104, avg=247.81, stdev=272.34 00:12:25.681 clat percentiles (usec): 00:12:25.681 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:12:25.681 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:12:25.681 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:12:25.681 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 3884], 99.95th=[ 7570], 00:12:25.681 | 99.99th=[ 8094] 00:12:25.681 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:25.681 slat (usec): min=17, max=153, avg=23.58, stdev= 7.31 00:12:25.681 clat (usec): min=111, max=5454, avg=143.44, stdev=107.10 00:12:25.681 lat (usec): min=130, max=5478, avg=167.02, stdev=107.62 00:12:25.681 clat percentiles (usec): 00:12:25.681 | 1.00th=[ 115], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:12:25.681 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 143], 00:12:25.681 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 182], 00:12:25.681 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 306], 99.95th=[ 457], 00:12:25.681 | 99.99th=[ 5473] 00:12:25.681 bw ( KiB/s): min=10283, max=10283, per=100.00%, avg=10283.00, stdev= 0.00, samples=1 00:12:25.681 iops : min= 2570, max= 2570, avg=2570.00, stdev= 0.00, samples=1 00:12:25.681 lat (usec) : 250=94.80%, 500=5.01%, 1000=0.02% 00:12:25.681 lat (msec) : 4=0.10%, 10=0.06% 00:12:25.681 cpu : usr=2.00%, sys=7.90%, ctx=4833, majf=0, minf=5 00:12:25.681 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.681 issued rwts: total=2271,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.681 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.681 00:12:25.681 Run status group 0 (all jobs): 00:12:25.681 READ: bw=9075KiB/s (9293kB/s), 9075KiB/s-9075KiB/s (9293kB/s-9293kB/s), io=9084KiB (9302kB), run=1001-1001msec 00:12:25.681 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:25.681 00:12:25.681 Disk stats (read/write): 00:12:25.681 nvme0n1: 
ios=2098/2260, merge=0/0, ticks=496/372, in_queue=868, util=90.48% 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:25.681 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:25.682 rmmod nvme_tcp 00:12:25.682 rmmod nvme_fabrics 00:12:25.682 rmmod nvme_keyring 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 68534 ']' 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 68534 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 68534 ']' 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 68534 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68534 00:12:25.682 killing process with pid 68534 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68534' 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@969 -- # kill 68534 00:12:25.682 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 68534 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.091 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.091 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:27.091 00:12:27.091 real 0m7.258s 00:12:27.091 user 0m20.928s 00:12:27.091 sys 0m2.494s 00:12:27.092 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.092 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:27.092 
************************************ 00:12:27.092 END TEST nvmf_nmic 00:12:27.092 ************************************ 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:27.352 ************************************ 00:12:27.352 START TEST nvmf_fio_target 00:12:27.352 ************************************ 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:27.352 * Looking for test storage... 00:12:27.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:27.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.352 --rc genhtml_branch_coverage=1 00:12:27.352 --rc genhtml_function_coverage=1 00:12:27.352 --rc genhtml_legend=1 00:12:27.352 --rc geninfo_all_blocks=1 00:12:27.352 --rc geninfo_unexecuted_blocks=1 00:12:27.352 00:12:27.352 ' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:27.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.352 --rc genhtml_branch_coverage=1 00:12:27.352 --rc genhtml_function_coverage=1 00:12:27.352 --rc genhtml_legend=1 00:12:27.352 --rc geninfo_all_blocks=1 00:12:27.352 --rc geninfo_unexecuted_blocks=1 00:12:27.352 00:12:27.352 ' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:27.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.352 --rc genhtml_branch_coverage=1 00:12:27.352 --rc genhtml_function_coverage=1 00:12:27.352 --rc genhtml_legend=1 00:12:27.352 --rc geninfo_all_blocks=1 00:12:27.352 --rc geninfo_unexecuted_blocks=1 00:12:27.352 00:12:27.352 ' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:27.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.352 --rc genhtml_branch_coverage=1 00:12:27.352 --rc genhtml_function_coverage=1 00:12:27.352 --rc genhtml_legend=1 00:12:27.352 --rc geninfo_all_blocks=1 00:12:27.352 --rc geninfo_unexecuted_blocks=1 00:12:27.352 00:12:27.352 ' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:27.352 
01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.352 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.612 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:27.612 01:25:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:27.612 Cannot find device "nvmf_init_br" 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:27.612 Cannot find device "nvmf_init_br2" 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:27.612 Cannot find device "nvmf_tgt_br" 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.612 Cannot find device "nvmf_tgt_br2" 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:27.612 Cannot find device "nvmf_init_br" 00:12:27.612 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:27.613 Cannot find device "nvmf_init_br2" 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:27.613 Cannot find device "nvmf_tgt_br" 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:27.613 Cannot find device "nvmf_tgt_br2" 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:27.613 Cannot find device "nvmf_br" 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:27.613 Cannot find device "nvmf_init_if" 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:27.613 Cannot find device "nvmf_init_if2" 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:27.613 
01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:27.613 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:27.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:27.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:12:27.873 00:12:27.873 --- 10.0.0.3 ping statistics --- 00:12:27.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.873 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:27.873 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:27.873 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:12:27.873 00:12:27.873 --- 10.0.0.4 ping statistics --- 00:12:27.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.873 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:27.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:27.873 00:12:27.873 --- 10.0.0.1 ping statistics --- 00:12:27.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.873 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:27.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:27.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:12:27.873 00:12:27.873 --- 10.0.0.2 ping statistics --- 00:12:27.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.873 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=68865 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 68865 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 68865 ']' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.873 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.133 [2024-09-28 01:25:23.835748] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:28.133 [2024-09-28 01:25:23.836172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.133 [2024-09-28 01:25:24.014569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.393 [2024-09-28 01:25:24.249640] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.393 [2024-09-28 01:25:24.249717] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.393 [2024-09-28 01:25:24.249744] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.393 [2024-09-28 01:25:24.249759] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.393 [2024-09-28 01:25:24.249776] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.393 [2024-09-28 01:25:24.249982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.393 [2024-09-28 01:25:24.250772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.393 [2024-09-28 01:25:24.250941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.393 [2024-09-28 01:25:24.250964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.651 [2024-09-28 01:25:24.439239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:28.910 01:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.910 01:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:28.910 01:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:28.910 01:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.910 01:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.169 01:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.169 01:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:29.428 [2024-09-28 01:25:25.135341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.428 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:29.687 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:29.687 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:29.946 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:29.946 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:30.205 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:30.463 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:30.722 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:30.722 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:30.980 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:31.239 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:31.239 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:31.497 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:31.497 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:32.066 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:32.066 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:32.325 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.584 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:32.584 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.843 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:32.843 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:33.102 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:33.361 [2024-09-28 01:25:29.058611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:33.361 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:33.620 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:33.879 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:33.879 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:33.879 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.879 01:25:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.879 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:33.879 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:33.879 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:35.812 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.812 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.812 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.812 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:35.812 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.812 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:35.812 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:35.812 [global] 00:12:35.812 thread=1 00:12:35.812 invalidate=1 00:12:35.812 rw=write 00:12:35.812 time_based=1 00:12:35.812 runtime=1 00:12:35.812 ioengine=libaio 00:12:35.812 direct=1 00:12:35.812 bs=4096 00:12:35.812 iodepth=1 00:12:35.812 norandommap=0 00:12:35.812 numjobs=1 00:12:35.812 00:12:35.812 verify_dump=1 00:12:35.812 verify_backlog=512 00:12:35.812 verify_state_save=0 00:12:35.812 do_verify=1 00:12:35.812 verify=crc32c-intel 00:12:35.812 [job0] 00:12:35.812 filename=/dev/nvme0n1 00:12:35.812 [job1] 00:12:35.812 filename=/dev/nvme0n2 00:12:36.071 [job2] 00:12:36.071 filename=/dev/nvme0n3 00:12:36.071 [job3] 00:12:36.071 filename=/dev/nvme0n4 00:12:36.071 Could not set queue depth (nvme0n1) 00:12:36.071 Could not set queue depth (nvme0n2) 00:12:36.071 Could not set queue depth (nvme0n3) 00:12:36.071 Could not set queue depth (nvme0n4) 00:12:36.071 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.071 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.071 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.071 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.071 fio-3.35 00:12:36.071 Starting 4 threads 00:12:37.448 00:12:37.448 job0: (groupid=0, jobs=1): err= 0: pid=69060: Sat Sep 28 01:25:33 2024 00:12:37.448 read: IOPS=2063, BW=8256KiB/s (8454kB/s)(8264KiB/1001msec) 00:12:37.448 slat (nsec): min=8472, max=58709, avg=15235.52, stdev=4391.18 00:12:37.448 clat (usec): min=160, max=478, avg=214.86, stdev=56.38 00:12:37.448 lat (usec): min=172, max=492, avg=230.10, stdev=56.76 00:12:37.448 clat percentiles (usec): 00:12:37.448 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:12:37.448 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 206], 00:12:37.448 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 258], 95.00th=[ 383], 00:12:37.448 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 457], 99.95th=[ 478], 00:12:37.448 | 99.99th=[ 478] 
00:12:37.448 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:37.448 slat (usec): min=14, max=131, avg=23.10, stdev= 7.27 00:12:37.448 clat (usec): min=109, max=993, avg=178.37, stdev=78.75 00:12:37.448 lat (usec): min=127, max=1058, avg=201.47, stdev=80.31 00:12:37.448 clat percentiles (usec): 00:12:37.448 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 133], 00:12:37.448 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 157], 00:12:37.448 | 70.00th=[ 167], 80.00th=[ 184], 90.00th=[ 338], 95.00th=[ 367], 00:12:37.448 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 490], 99.95th=[ 529], 00:12:37.448 | 99.99th=[ 996] 00:12:37.448 bw ( KiB/s): min=12288, max=12288, per=37.54%, avg=12288.00, stdev= 0.00, samples=1 00:12:37.448 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:37.448 lat (usec) : 250=87.35%, 500=12.60%, 750=0.02%, 1000=0.02% 00:12:37.448 cpu : usr=2.20%, sys=7.20%, ctx=4630, majf=0, minf=3 00:12:37.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.448 issued rwts: total=2066,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.448 job1: (groupid=0, jobs=1): err= 0: pid=69061: Sat Sep 28 01:25:33 2024 00:12:37.448 read: IOPS=1499, BW=5998KiB/s (6142kB/s)(6004KiB/1001msec) 00:12:37.448 slat (nsec): min=9327, max=98678, avg=19879.33, stdev=7787.27 00:12:37.448 clat (usec): min=178, max=540, avg=353.35, stdev=37.75 00:12:37.448 lat (usec): min=208, max=555, avg=373.23, stdev=37.50 00:12:37.448 clat percentiles (usec): 00:12:37.448 | 1.00th=[ 277], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 326], 00:12:37.448 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 359], 00:12:37.448 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 416], 00:12:37.448 | 99.00th=[ 482], 99.50th=[ 502], 99.90th=[ 537], 99.95th=[ 537], 00:12:37.448 | 99.99th=[ 537] 00:12:37.448 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:37.448 slat (nsec): min=12998, max=72184, avg=24508.22, stdev=7088.91 00:12:37.448 clat (usec): min=138, max=2012, avg=257.50, stdev=61.13 00:12:37.448 lat (usec): min=159, max=2036, avg=282.01, stdev=62.26 00:12:37.448 clat percentiles (usec): 00:12:37.448 | 1.00th=[ 153], 5.00th=[ 169], 10.00th=[ 186], 20.00th=[ 235], 00:12:37.448 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:12:37.448 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 310], 00:12:37.448 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 578], 99.95th=[ 2008], 00:12:37.448 | 99.99th=[ 2008] 00:12:37.448 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:12:37.448 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:37.448 lat (usec) : 250=17.19%, 500=82.48%, 750=0.30% 00:12:37.448 lat (msec) : 4=0.03% 00:12:37.448 cpu : usr=1.10%, sys=6.30%, ctx=3076, majf=0, minf=7 00:12:37.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.448 issued rwts: total=1501,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.448 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:12:37.448 job2: (groupid=0, jobs=1): err= 0: pid=69062: Sat Sep 28 01:25:33 2024 00:12:37.448 read: IOPS=1479, BW=5918KiB/s (6060kB/s)(5924KiB/1001msec) 00:12:37.448 slat (usec): min=9, max=101, avg=15.18, stdev= 6.15 00:12:37.448 clat (usec): min=195, max=3306, avg=353.63, stdev=89.83 00:12:37.448 lat (usec): min=213, max=3331, avg=368.81, stdev=89.76 00:12:37.448 clat percentiles (usec): 00:12:37.448 | 1.00th=[ 208], 5.00th=[ 249], 10.00th=[ 310], 20.00th=[ 326], 00:12:37.448 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 359], 00:12:37.448 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 416], 00:12:37.448 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 3294], 00:12:37.448 | 99.99th=[ 3294] 00:12:37.448 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:37.448 slat (usec): min=16, max=103, avg=24.77, stdev= 8.22 00:12:37.448 clat (usec): min=140, max=7314, avg=266.78, stdev=231.25 00:12:37.448 lat (usec): min=165, max=7337, avg=291.54, stdev=231.33 00:12:37.448 clat percentiles (usec): 00:12:37.448 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 237], 00:12:37.448 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:12:37.448 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 310], 00:12:37.448 | 99.00th=[ 347], 99.50th=[ 388], 99.90th=[ 4047], 99.95th=[ 7308], 00:12:37.448 | 99.99th=[ 7308] 00:12:37.448 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:12:37.448 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:37.448 lat (usec) : 250=18.43%, 500=81.04%, 750=0.30%, 1000=0.03% 00:12:37.448 lat (msec) : 2=0.03%, 4=0.10%, 10=0.07% 00:12:37.448 cpu : usr=0.90%, sys=5.70%, ctx=3056, majf=0, minf=12 00:12:37.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.448 issued rwts: total=1481,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.448 job3: (groupid=0, jobs=1): err= 0: pid=69063: Sat Sep 28 01:25:33 2024 00:12:37.448 read: IOPS=2308, BW=9235KiB/s (9456kB/s)(9244KiB/1001msec) 00:12:37.448 slat (nsec): min=11919, max=57978, avg=14963.17, stdev=4390.31 00:12:37.448 clat (usec): min=173, max=321, avg=211.82, stdev=21.46 00:12:37.448 lat (usec): min=186, max=336, avg=226.79, stdev=22.38 00:12:37.448 clat percentiles (usec): 00:12:37.448 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:12:37.448 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 217], 00:12:37.448 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 251], 00:12:37.448 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 314], 99.95th=[ 314], 00:12:37.448 | 99.99th=[ 322] 00:12:37.448 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:37.448 slat (usec): min=14, max=116, avg=22.14, stdev= 6.69 00:12:37.448 clat (usec): min=118, max=2442, avg=160.31, stdev=51.18 00:12:37.448 lat (usec): min=135, max=2467, avg=182.45, stdev=51.92 00:12:37.448 clat percentiles (usec): 00:12:37.448 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:12:37.448 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:12:37.448 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 198], 00:12:37.448 | 
99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 490], 99.95th=[ 717], 00:12:37.448 | 99.99th=[ 2442] 00:12:37.448 bw ( KiB/s): min=10912, max=10912, per=33.33%, avg=10912.00, stdev= 0.00, samples=1 00:12:37.448 iops : min= 2728, max= 2728, avg=2728.00, stdev= 0.00, samples=1 00:12:37.448 lat (usec) : 250=97.27%, 500=2.69%, 750=0.02% 00:12:37.448 lat (msec) : 4=0.02% 00:12:37.448 cpu : usr=2.10%, sys=7.20%, ctx=4872, majf=0, minf=17 00:12:37.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.448 issued rwts: total=2311,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.448 00:12:37.448 Run status group 0 (all jobs): 00:12:37.448 READ: bw=28.7MiB/s (30.1MB/s), 5918KiB/s-9235KiB/s (6060kB/s-9456kB/s), io=28.7MiB (30.1MB), run=1001-1001msec 00:12:37.448 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:12:37.448 00:12:37.448 Disk stats (read/write): 00:12:37.448 nvme0n1: ios=2098/2221, merge=0/0, ticks=486/346, in_queue=832, util=88.47% 00:12:37.448 nvme0n2: ios=1197/1536, merge=0/0, ticks=436/395, in_queue=831, util=88.95% 00:12:37.448 nvme0n3: ios=1131/1536, merge=0/0, ticks=358/377, in_queue=735, util=88.31% 00:12:37.448 nvme0n4: ios=2048/2094, merge=0/0, ticks=451/360, in_queue=811, util=89.80% 00:12:37.448 01:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:37.448 [global] 00:12:37.448 thread=1 00:12:37.448 invalidate=1 00:12:37.448 rw=randwrite 00:12:37.448 time_based=1 00:12:37.448 runtime=1 00:12:37.448 ioengine=libaio 00:12:37.448 direct=1 00:12:37.448 bs=4096 00:12:37.448 iodepth=1 00:12:37.448 norandommap=0 00:12:37.448 numjobs=1 00:12:37.448 00:12:37.448 verify_dump=1 00:12:37.448 verify_backlog=512 00:12:37.448 verify_state_save=0 00:12:37.448 do_verify=1 00:12:37.448 verify=crc32c-intel 00:12:37.448 [job0] 00:12:37.448 filename=/dev/nvme0n1 00:12:37.448 [job1] 00:12:37.448 filename=/dev/nvme0n2 00:12:37.448 [job2] 00:12:37.448 filename=/dev/nvme0n3 00:12:37.448 [job3] 00:12:37.448 filename=/dev/nvme0n4 00:12:37.448 Could not set queue depth (nvme0n1) 00:12:37.448 Could not set queue depth (nvme0n2) 00:12:37.449 Could not set queue depth (nvme0n3) 00:12:37.449 Could not set queue depth (nvme0n4) 00:12:37.449 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.449 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.449 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.449 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.449 fio-3.35 00:12:37.449 Starting 4 threads 00:12:38.825 00:12:38.825 job0: (groupid=0, jobs=1): err= 0: pid=69122: Sat Sep 28 01:25:34 2024 00:12:38.825 read: IOPS=1448, BW=5794KiB/s (5933kB/s)(5800KiB/1001msec) 00:12:38.825 slat (nsec): min=11955, max=79993, avg=20872.52, stdev=7752.09 00:12:38.825 clat (usec): min=173, max=2681, avg=394.36, stdev=172.62 00:12:38.825 lat (usec): min=186, max=2699, avg=415.24, stdev=177.29 00:12:38.825 clat percentiles 
(usec): 00:12:38.825 | 1.00th=[ 208], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:12:38.825 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 343], 00:12:38.825 | 70.00th=[ 396], 80.00th=[ 437], 90.00th=[ 635], 95.00th=[ 840], 00:12:38.825 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 947], 99.95th=[ 2671], 00:12:38.825 | 99.99th=[ 2671] 00:12:38.825 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:38.825 slat (nsec): min=15825, max=88152, avg=28925.88, stdev=7852.96 00:12:38.825 clat (usec): min=124, max=651, avg=224.81, stdev=65.76 00:12:38.825 lat (usec): min=145, max=681, avg=253.73, stdev=66.77 00:12:38.825 clat percentiles (usec): 00:12:38.825 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 163], 00:12:38.825 | 30.00th=[ 178], 40.00th=[ 196], 50.00th=[ 221], 60.00th=[ 237], 00:12:38.825 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 310], 95.00th=[ 355], 00:12:38.825 | 99.00th=[ 408], 99.50th=[ 416], 99.90th=[ 502], 99.95th=[ 652], 00:12:38.825 | 99.99th=[ 652] 00:12:38.825 bw ( KiB/s): min= 8192, max= 8192, per=26.84%, avg=8192.00, stdev= 0.00, samples=1 00:12:38.825 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:38.825 lat (usec) : 250=36.10%, 500=57.17%, 750=2.51%, 1000=4.19% 00:12:38.825 lat (msec) : 4=0.03% 00:12:38.825 cpu : usr=1.70%, sys=6.40%, ctx=2994, majf=0, minf=9 00:12:38.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.825 issued rwts: total=1450,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.825 job1: (groupid=0, jobs=1): err= 0: pid=69123: Sat Sep 28 01:25:34 2024 00:12:38.825 read: IOPS=1063, BW=4256KiB/s (4358kB/s)(4260KiB/1001msec) 00:12:38.825 slat (nsec): min=12247, max=75175, avg=23559.24, stdev=10102.02 00:12:38.825 clat (usec): min=180, max=1295, avg=433.40, stdev=113.36 00:12:38.825 lat (usec): min=197, max=1313, avg=456.95, stdev=115.15 00:12:38.825 clat percentiles (usec): 00:12:38.825 | 1.00th=[ 229], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 318], 00:12:38.825 | 30.00th=[ 338], 40.00th=[ 396], 50.00th=[ 433], 60.00th=[ 457], 00:12:38.825 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 619], 00:12:38.825 | 99.00th=[ 668], 99.50th=[ 693], 99.90th=[ 758], 99.95th=[ 1303], 00:12:38.825 | 99.99th=[ 1303] 00:12:38.825 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:38.825 slat (usec): min=18, max=326, avg=35.77, stdev=12.79 00:12:38.825 clat (usec): min=132, max=2673, avg=293.04, stdev=97.10 00:12:38.825 lat (usec): min=157, max=2708, avg=328.81, stdev=101.77 00:12:38.825 clat percentiles (usec): 00:12:38.825 | 1.00th=[ 141], 5.00th=[ 165], 10.00th=[ 219], 20.00th=[ 239], 00:12:38.825 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:12:38.825 | 70.00th=[ 302], 80.00th=[ 347], 90.00th=[ 416], 95.00th=[ 437], 00:12:38.825 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 611], 99.95th=[ 2671], 00:12:38.825 | 99.99th=[ 2671] 00:12:38.825 bw ( KiB/s): min= 6376, max= 6376, per=20.89%, avg=6376.00, stdev= 0.00, samples=1 00:12:38.826 iops : min= 1594, max= 1594, avg=1594.00, stdev= 0.00, samples=1 00:12:38.826 lat (usec) : 250=15.57%, 500=71.78%, 750=12.53%, 1000=0.04% 00:12:38.826 lat (msec) : 2=0.04%, 4=0.04% 00:12:38.826 cpu : usr=1.30%, sys=7.30%, 
ctx=2601, majf=0, minf=15 00:12:38.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.826 issued rwts: total=1065,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.826 job2: (groupid=0, jobs=1): err= 0: pid=69124: Sat Sep 28 01:25:34 2024 00:12:38.826 read: IOPS=2420, BW=9682KiB/s (9915kB/s)(9692KiB/1001msec) 00:12:38.826 slat (nsec): min=11292, max=58592, avg=13465.86, stdev=3882.88 00:12:38.826 clat (usec): min=173, max=303, avg=207.19, stdev=18.44 00:12:38.826 lat (usec): min=185, max=316, avg=220.65, stdev=18.92 00:12:38.826 clat percentiles (usec): 00:12:38.826 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:12:38.826 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:12:38.826 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 241], 00:12:38.826 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 293], 00:12:38.826 | 99.99th=[ 306] 00:12:38.826 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:38.826 slat (nsec): min=14679, max=87188, avg=20881.28, stdev=5714.38 00:12:38.826 clat (usec): min=121, max=512, avg=157.36, stdev=22.62 00:12:38.826 lat (usec): min=140, max=530, avg=178.25, stdev=23.95 00:12:38.826 clat percentiles (usec): 00:12:38.826 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:12:38.826 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 159], 00:12:38.826 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 186], 95.00th=[ 196], 00:12:38.826 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 359], 99.95th=[ 408], 00:12:38.826 | 99.99th=[ 515] 00:12:38.826 bw ( KiB/s): min=12024, max=12024, per=39.40%, avg=12024.00, stdev= 0.00, samples=1 00:12:38.826 iops : min= 3006, max= 3006, avg=3006.00, stdev= 0.00, samples=1 00:12:38.826 lat (usec) : 250=98.64%, 500=1.34%, 750=0.02% 00:12:38.826 cpu : usr=2.10%, sys=7.10%, ctx=4983, majf=0, minf=11 00:12:38.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.826 issued rwts: total=2423,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.826 job3: (groupid=0, jobs=1): err= 0: pid=69125: Sat Sep 28 01:25:34 2024 00:12:38.826 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:38.826 slat (nsec): min=11464, max=74381, avg=17882.38, stdev=5820.11 00:12:38.826 clat (usec): min=186, max=7245, avg=321.89, stdev=226.02 00:12:38.826 lat (usec): min=200, max=7267, avg=339.77, stdev=227.87 00:12:38.826 clat percentiles (usec): 00:12:38.826 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 215], 00:12:38.826 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 253], 00:12:38.826 | 70.00th=[ 392], 80.00th=[ 461], 90.00th=[ 553], 95.00th=[ 594], 00:12:38.826 | 99.00th=[ 652], 99.50th=[ 685], 99.90th=[ 1385], 99.95th=[ 7242], 00:12:38.826 | 99.99th=[ 7242] 00:12:38.826 write: IOPS=2003, BW=8016KiB/s (8208kB/s)(8024KiB/1001msec); 0 zone resets 00:12:38.826 slat (nsec): min=15044, max=84760, avg=23904.90, stdev=6476.92 00:12:38.826 clat (usec): min=138, max=7984, avg=210.77, 
stdev=248.81 00:12:38.826 lat (usec): min=157, max=8012, avg=234.67, stdev=248.79 00:12:38.826 clat percentiles (usec): 00:12:38.826 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:12:38.826 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 190], 00:12:38.826 | 70.00th=[ 204], 80.00th=[ 239], 90.00th=[ 289], 95.00th=[ 326], 00:12:38.826 | 99.00th=[ 445], 99.50th=[ 490], 99.90th=[ 2704], 99.95th=[ 7242], 00:12:38.826 | 99.99th=[ 7963] 00:12:38.826 bw ( KiB/s): min= 9784, max= 9784, per=32.06%, avg=9784.00, stdev= 0.00, samples=1 00:12:38.826 iops : min= 2446, max= 2446, avg=2446.00, stdev= 0.00, samples=1 00:12:38.826 lat (usec) : 250=72.05%, 500=20.24%, 750=7.48%, 1000=0.08% 00:12:38.826 lat (msec) : 2=0.03%, 4=0.03%, 10=0.08% 00:12:38.826 cpu : usr=1.60%, sys=6.40%, ctx=3542, majf=0, minf=11 00:12:38.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.826 issued rwts: total=1536,2006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.826 00:12:38.826 Run status group 0 (all jobs): 00:12:38.826 READ: bw=25.3MiB/s (26.5MB/s), 4256KiB/s-9682KiB/s (4358kB/s-9915kB/s), io=25.3MiB (26.5MB), run=1001-1001msec 00:12:38.826 WRITE: bw=29.8MiB/s (31.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=29.8MiB (31.3MB), run=1001-1001msec 00:12:38.826 00:12:38.826 Disk stats (read/write): 00:12:38.826 nvme0n1: ios=1305/1536, merge=0/0, ticks=464/365, in_queue=829, util=87.58% 00:12:38.826 nvme0n2: ios=1067/1152, merge=0/0, ticks=466/358, in_queue=824, util=88.93% 00:12:38.826 nvme0n3: ios=2048/2239, merge=0/0, ticks=439/380, in_queue=819, util=89.13% 00:12:38.826 nvme0n4: ios=1536/1559, merge=0/0, ticks=484/294, in_queue=778, util=88.63% 00:12:38.826 01:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:38.826 [global] 00:12:38.826 thread=1 00:12:38.826 invalidate=1 00:12:38.826 rw=write 00:12:38.826 time_based=1 00:12:38.826 runtime=1 00:12:38.826 ioengine=libaio 00:12:38.826 direct=1 00:12:38.826 bs=4096 00:12:38.826 iodepth=128 00:12:38.826 norandommap=0 00:12:38.826 numjobs=1 00:12:38.826 00:12:38.826 verify_dump=1 00:12:38.826 verify_backlog=512 00:12:38.826 verify_state_save=0 00:12:38.826 do_verify=1 00:12:38.826 verify=crc32c-intel 00:12:38.826 [job0] 00:12:38.826 filename=/dev/nvme0n1 00:12:38.826 [job1] 00:12:38.826 filename=/dev/nvme0n2 00:12:38.826 [job2] 00:12:38.826 filename=/dev/nvme0n3 00:12:38.826 [job3] 00:12:38.826 filename=/dev/nvme0n4 00:12:38.826 Could not set queue depth (nvme0n1) 00:12:38.826 Could not set queue depth (nvme0n2) 00:12:38.826 Could not set queue depth (nvme0n3) 00:12:38.826 Could not set queue depth (nvme0n4) 00:12:38.826 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:38.826 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:38.826 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:38.826 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:38.826 fio-3.35 00:12:38.826 Starting 4 threads 00:12:40.204 00:12:40.204 
job0: (groupid=0, jobs=1): err= 0: pid=69180: Sat Sep 28 01:25:35 2024 00:12:40.204 read: IOPS=2005, BW=8024KiB/s (8216kB/s)(8096KiB/1009msec) 00:12:40.204 slat (usec): min=3, max=10385, avg=279.41, stdev=1022.13 00:12:40.204 clat (usec): min=6710, max=48659, avg=35514.29, stdev=6591.61 00:12:40.204 lat (usec): min=11124, max=48981, avg=35793.70, stdev=6611.92 00:12:40.204 clat percentiles (usec): 00:12:40.204 | 1.00th=[13042], 5.00th=[24511], 10.00th=[27132], 20.00th=[29492], 00:12:40.204 | 30.00th=[32113], 40.00th=[35390], 50.00th=[36963], 60.00th=[38011], 00:12:40.204 | 70.00th=[39584], 80.00th=[41157], 90.00th=[42730], 95.00th=[45351], 00:12:40.204 | 99.00th=[47449], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:12:40.204 | 99.99th=[48497] 00:12:40.204 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:12:40.204 slat (usec): min=8, max=8893, avg=204.88, stdev=828.62 00:12:40.204 clat (usec): min=15439, max=41037, avg=27008.62, stdev=5820.14 00:12:40.204 lat (usec): min=15524, max=41051, avg=27213.50, stdev=5854.20 00:12:40.204 clat percentiles (usec): 00:12:40.204 | 1.00th=[17171], 5.00th=[19530], 10.00th=[19530], 20.00th=[20579], 00:12:40.204 | 30.00th=[22938], 40.00th=[25297], 50.00th=[27132], 60.00th=[28181], 00:12:40.204 | 70.00th=[30016], 80.00th=[32113], 90.00th=[35914], 95.00th=[37487], 00:12:40.204 | 99.00th=[40109], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:12:40.204 | 99.99th=[41157] 00:12:40.204 bw ( KiB/s): min= 7960, max= 8408, per=17.20%, avg=8184.00, stdev=316.78, samples=2 00:12:40.204 iops : min= 1990, max= 2102, avg=2046.00, stdev=79.20, samples=2 00:12:40.204 lat (msec) : 10=0.02%, 20=8.79%, 50=91.18% 00:12:40.204 cpu : usr=2.58%, sys=5.56%, ctx=676, majf=0, minf=12 00:12:40.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:40.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:40.204 issued rwts: total=2024,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:40.204 job1: (groupid=0, jobs=1): err= 0: pid=69181: Sat Sep 28 01:25:35 2024 00:12:40.204 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:12:40.204 slat (usec): min=3, max=11534, avg=178.02, stdev=755.98 00:12:40.204 clat (usec): min=10440, max=51475, avg=23521.35, stdev=12035.96 00:12:40.204 lat (usec): min=10448, max=53658, avg=23699.38, stdev=12133.52 00:12:40.204 clat percentiles (usec): 00:12:40.204 | 1.00th=[11076], 5.00th=[11731], 10.00th=[12649], 20.00th=[13304], 00:12:40.204 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14746], 60.00th=[29492], 00:12:40.204 | 70.00th=[34866], 80.00th=[36963], 90.00th=[39584], 95.00th=[41681], 00:12:40.204 | 99.00th=[47973], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:12:40.204 | 99.99th=[51643] 00:12:40.204 write: IOPS=3025, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1009msec); 0 zone resets 00:12:40.204 slat (usec): min=10, max=8806, avg=171.14, stdev=683.80 00:12:40.204 clat (usec): min=7996, max=44043, avg=22001.54, stdev=9140.55 00:12:40.204 lat (usec): min=10918, max=44109, avg=22172.68, stdev=9208.53 00:12:40.204 clat percentiles (usec): 00:12:40.204 | 1.00th=[11076], 5.00th=[12387], 10.00th=[12780], 20.00th=[13042], 00:12:40.204 | 30.00th=[13304], 40.00th=[13829], 50.00th=[18744], 60.00th=[26346], 00:12:40.204 | 70.00th=[29230], 80.00th=[31851], 90.00th=[34866], 95.00th=[36439], 00:12:40.204 
| 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:12:40.204 | 99.99th=[44303] 00:12:40.204 bw ( KiB/s): min= 7009, max=16384, per=24.59%, avg=11696.50, stdev=6629.13, samples=2 00:12:40.204 iops : min= 1752, max= 4096, avg=2924.00, stdev=1657.46, samples=2 00:12:40.204 lat (msec) : 10=0.02%, 20=54.34%, 50=45.25%, 100=0.39% 00:12:40.204 cpu : usr=2.78%, sys=7.94%, ctx=596, majf=0, minf=13 00:12:40.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:40.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:40.204 issued rwts: total=2560,3053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:40.204 job2: (groupid=0, jobs=1): err= 0: pid=69182: Sat Sep 28 01:25:35 2024 00:12:40.204 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:12:40.204 slat (usec): min=5, max=5857, avg=103.49, stdev=414.51 00:12:40.204 clat (usec): min=9273, max=21627, avg=13349.74, stdev=1654.64 00:12:40.204 lat (usec): min=9302, max=22369, avg=13453.23, stdev=1694.24 00:12:40.204 clat percentiles (usec): 00:12:40.204 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11863], 20.00th=[12256], 00:12:40.204 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:12:40.204 | 70.00th=[14091], 80.00th=[14746], 90.00th=[15270], 95.00th=[16057], 00:12:40.204 | 99.00th=[18744], 99.50th=[20317], 99.90th=[21627], 99.95th=[21627], 00:12:40.204 | 99.99th=[21627] 00:12:40.204 write: IOPS=4830, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1004msec); 0 zone resets 00:12:40.204 slat (usec): min=12, max=4036, avg=99.95, stdev=409.86 00:12:40.204 clat (usec): min=3755, max=22908, avg=13457.93, stdev=1782.54 00:12:40.204 lat (usec): min=4330, max=22942, avg=13557.87, stdev=1821.48 00:12:40.204 clat percentiles (usec): 00:12:40.204 | 1.00th=[ 8848], 5.00th=[11469], 10.00th=[11600], 20.00th=[11994], 00:12:40.204 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:12:40.204 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15533], 95.00th=[15926], 00:12:40.204 | 99.00th=[18482], 99.50th=[19268], 99.90th=[22152], 99.95th=[22152], 00:12:40.204 | 99.99th=[22938] 00:12:40.204 bw ( KiB/s): min=18387, max=19321, per=39.64%, avg=18854.00, stdev=660.44, samples=2 00:12:40.204 iops : min= 4596, max= 4830, avg=4713.00, stdev=165.46, samples=2 00:12:40.204 lat (msec) : 4=0.01%, 10=1.35%, 20=98.17%, 50=0.47% 00:12:40.204 cpu : usr=4.19%, sys=13.96%, ctx=511, majf=0, minf=11 00:12:40.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:40.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:40.204 issued rwts: total=4608,4850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:40.204 job3: (groupid=0, jobs=1): err= 0: pid=69183: Sat Sep 28 01:25:35 2024 00:12:40.204 read: IOPS=1780, BW=7122KiB/s (7293kB/s)(7172KiB/1007msec) 00:12:40.204 slat (usec): min=7, max=10957, avg=278.57, stdev=1009.57 00:12:40.204 clat (usec): min=4362, max=50540, avg=34659.47, stdev=7224.88 00:12:40.204 lat (usec): min=6771, max=52521, avg=34938.04, stdev=7248.38 00:12:40.204 clat percentiles (usec): 00:12:40.204 | 1.00th=[ 8586], 5.00th=[24249], 10.00th=[26608], 20.00th=[29230], 00:12:40.204 | 30.00th=[30016], 
40.00th=[32375], 50.00th=[34866], 60.00th=[36963], 00:12:40.204 | 70.00th=[39060], 80.00th=[41157], 90.00th=[43254], 95.00th=[44827], 00:12:40.204 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:12:40.204 | 99.99th=[50594] 00:12:40.204 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:12:40.204 slat (usec): min=6, max=8121, avg=237.40, stdev=880.71 00:12:40.204 clat (usec): min=15798, max=47132, avg=31250.55, stdev=5312.51 00:12:40.204 lat (usec): min=15819, max=47160, avg=31487.94, stdev=5308.01 00:12:40.204 clat percentiles (usec): 00:12:40.204 | 1.00th=[19530], 5.00th=[21890], 10.00th=[24773], 20.00th=[27132], 00:12:40.204 | 30.00th=[28181], 40.00th=[29492], 50.00th=[31327], 60.00th=[32900], 00:12:40.204 | 70.00th=[34341], 80.00th=[35390], 90.00th=[38536], 95.00th=[40109], 00:12:40.204 | 99.00th=[44303], 99.50th=[44303], 99.90th=[45876], 99.95th=[46924], 00:12:40.204 | 99.99th=[46924] 00:12:40.204 bw ( KiB/s): min= 8175, max= 8192, per=17.20%, avg=8183.50, stdev=12.02, samples=2 00:12:40.204 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:12:40.204 lat (msec) : 10=0.57%, 20=1.98%, 50=97.40%, 100=0.05% 00:12:40.204 cpu : usr=1.99%, sys=5.86%, ctx=630, majf=0, minf=13 00:12:40.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:40.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:40.205 issued rwts: total=1793,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:40.205 00:12:40.205 Run status group 0 (all jobs): 00:12:40.205 READ: bw=42.5MiB/s (44.6MB/s), 7122KiB/s-17.9MiB/s (7293kB/s-18.8MB/s), io=42.9MiB (45.0MB), run=1004-1009msec 00:12:40.205 WRITE: bw=46.5MiB/s (48.7MB/s), 8119KiB/s-18.9MiB/s (8314kB/s-19.8MB/s), io=46.9MiB (49.1MB), run=1004-1009msec 00:12:40.205 00:12:40.205 Disk stats (read/write): 00:12:40.205 nvme0n1: ios=1585/2025, merge=0/0, ticks=17902/16191, in_queue=34093, util=87.15% 00:12:40.205 nvme0n2: ios=2500/2560, merge=0/0, ticks=17839/15357, in_queue=33196, util=88.52% 00:12:40.205 nvme0n3: ios=3942/4096, merge=0/0, ticks=16965/15732, in_queue=32697, util=89.29% 00:12:40.205 nvme0n4: ios=1536/1729, merge=0/0, ticks=17974/15803, in_queue=33777, util=89.03% 00:12:40.205 01:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:40.205 [global] 00:12:40.205 thread=1 00:12:40.205 invalidate=1 00:12:40.205 rw=randwrite 00:12:40.205 time_based=1 00:12:40.205 runtime=1 00:12:40.205 ioengine=libaio 00:12:40.205 direct=1 00:12:40.205 bs=4096 00:12:40.205 iodepth=128 00:12:40.205 norandommap=0 00:12:40.205 numjobs=1 00:12:40.205 00:12:40.205 verify_dump=1 00:12:40.205 verify_backlog=512 00:12:40.205 verify_state_save=0 00:12:40.205 do_verify=1 00:12:40.205 verify=crc32c-intel 00:12:40.205 [job0] 00:12:40.205 filename=/dev/nvme0n1 00:12:40.205 [job1] 00:12:40.205 filename=/dev/nvme0n2 00:12:40.205 [job2] 00:12:40.205 filename=/dev/nvme0n3 00:12:40.205 [job3] 00:12:40.205 filename=/dev/nvme0n4 00:12:40.205 Could not set queue depth (nvme0n1) 00:12:40.205 Could not set queue depth (nvme0n2) 00:12:40.205 Could not set queue depth (nvme0n3) 00:12:40.205 Could not set queue depth (nvme0n4) 00:12:40.205 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:12:40.205 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:40.205 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:40.205 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:40.205 fio-3.35 00:12:40.205 Starting 4 threads 00:12:41.584 00:12:41.584 job0: (groupid=0, jobs=1): err= 0: pid=69236: Sat Sep 28 01:25:37 2024 00:12:41.584 read: IOPS=2055, BW=8223KiB/s (8420kB/s)(8264KiB/1005msec) 00:12:41.584 slat (usec): min=12, max=5856, avg=185.11, stdev=766.89 00:12:41.584 clat (usec): min=3522, max=35241, avg=23128.27, stdev=4213.41 00:12:41.584 lat (usec): min=4569, max=36593, avg=23313.38, stdev=4267.31 00:12:41.584 clat percentiles (usec): 00:12:41.584 | 1.00th=[15533], 5.00th=[18220], 10.00th=[18744], 20.00th=[19530], 00:12:41.584 | 30.00th=[19792], 40.00th=[21103], 50.00th=[22938], 60.00th=[25035], 00:12:41.584 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27919], 95.00th=[29492], 00:12:41.584 | 99.00th=[32637], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:12:41.584 | 99.99th=[35390] 00:12:41.584 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:12:41.584 slat (usec): min=13, max=8469, avg=232.53, stdev=799.50 00:12:41.584 clat (usec): min=8904, max=58749, avg=30674.69, stdev=11660.78 00:12:41.584 lat (usec): min=8925, max=58773, avg=30907.22, stdev=11733.35 00:12:41.584 clat percentiles (usec): 00:12:41.584 | 1.00th=[12780], 5.00th=[13829], 10.00th=[15926], 20.00th=[19530], 00:12:41.584 | 30.00th=[22152], 40.00th=[25297], 50.00th=[32375], 60.00th=[34341], 00:12:41.584 | 70.00th=[35390], 80.00th=[40109], 90.00th=[48497], 95.00th=[51643], 00:12:41.584 | 99.00th=[56361], 99.50th=[57934], 99.90th=[58983], 99.95th=[58983], 00:12:41.584 | 99.99th=[58983] 00:12:41.584 bw ( KiB/s): min= 7744, max=11879, per=21.00%, avg=9811.50, stdev=2923.89, samples=2 00:12:41.584 iops : min= 1936, max= 2969, avg=2452.50, stdev=730.44, samples=2 00:12:41.584 lat (msec) : 4=0.02%, 10=0.52%, 20=27.67%, 50=67.86%, 100=3.93% 00:12:41.584 cpu : usr=2.99%, sys=7.67%, ctx=311, majf=0, minf=13 00:12:41.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:12:41.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.584 issued rwts: total=2066,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.584 job1: (groupid=0, jobs=1): err= 0: pid=69237: Sat Sep 28 01:25:37 2024 00:12:41.584 read: IOPS=3168, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1003msec) 00:12:41.584 slat (usec): min=6, max=9798, avg=155.16, stdev=739.26 00:12:41.584 clat (usec): min=2180, max=40059, avg=20116.36, stdev=5803.27 00:12:41.584 lat (usec): min=3808, max=40085, avg=20271.52, stdev=5793.96 00:12:41.584 clat percentiles (usec): 00:12:41.584 | 1.00th=[ 6521], 5.00th=[14353], 10.00th=[15401], 20.00th=[16909], 00:12:41.584 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[18482], 00:12:41.584 | 70.00th=[22414], 80.00th=[25035], 90.00th=[26084], 95.00th=[32637], 00:12:41.584 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:12:41.584 | 99.99th=[40109] 00:12:41.584 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:12:41.584 slat 
(usec): min=10, max=9169, avg=133.45, stdev=648.09 00:12:41.584 clat (usec): min=10302, max=30425, avg=17398.52, stdev=4411.18 00:12:41.584 lat (usec): min=12233, max=30451, avg=17531.97, stdev=4403.32 00:12:41.584 clat percentiles (usec): 00:12:41.584 | 1.00th=[11600], 5.00th=[13042], 10.00th=[13173], 20.00th=[13435], 00:12:41.584 | 30.00th=[14091], 40.00th=[15401], 50.00th=[16909], 60.00th=[17695], 00:12:41.584 | 70.00th=[18220], 80.00th=[20317], 90.00th=[24249], 95.00th=[27657], 00:12:41.584 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:12:41.584 | 99.99th=[30540] 00:12:41.584 bw ( KiB/s): min=12288, max=16216, per=30.50%, avg=14252.00, stdev=2777.52, samples=2 00:12:41.584 iops : min= 3072, max= 4054, avg=3563.00, stdev=694.38, samples=2 00:12:41.584 lat (msec) : 4=0.15%, 10=0.47%, 20=71.50%, 50=27.88% 00:12:41.584 cpu : usr=2.79%, sys=10.28%, ctx=298, majf=0, minf=7 00:12:41.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:41.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.584 issued rwts: total=3178,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.584 job2: (groupid=0, jobs=1): err= 0: pid=69238: Sat Sep 28 01:25:37 2024 00:12:41.584 read: IOPS=2779, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1005msec) 00:12:41.584 slat (usec): min=9, max=9598, avg=194.62, stdev=853.44 00:12:41.584 clat (usec): min=968, max=41243, avg=24858.15, stdev=5032.62 00:12:41.584 lat (usec): min=6182, max=41281, avg=25052.77, stdev=5087.02 00:12:41.584 clat percentiles (usec): 00:12:41.584 | 1.00th=[11469], 5.00th=[18744], 10.00th=[21103], 20.00th=[21890], 00:12:41.584 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22676], 60.00th=[24511], 00:12:41.584 | 70.00th=[26870], 80.00th=[30540], 90.00th=[32637], 95.00th=[33424], 00:12:41.584 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40109], 99.95th=[40109], 00:12:41.584 | 99.99th=[41157] 00:12:41.584 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:12:41.584 slat (usec): min=10, max=6272, avg=141.14, stdev=746.61 00:12:41.584 clat (usec): min=13644, max=37687, avg=18682.61, stdev=3494.80 00:12:41.584 lat (usec): min=13669, max=37721, avg=18823.75, stdev=3582.60 00:12:41.584 clat percentiles (usec): 00:12:41.584 | 1.00th=[13698], 5.00th=[13960], 10.00th=[14353], 20.00th=[16057], 00:12:41.584 | 30.00th=[16450], 40.00th=[17433], 50.00th=[17695], 60.00th=[18482], 00:12:41.584 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22414], 95.00th=[24773], 00:12:41.584 | 99.00th=[29754], 99.50th=[32900], 99.90th=[32900], 99.95th=[33817], 00:12:41.584 | 99.99th=[37487] 00:12:41.584 bw ( KiB/s): min=12288, max=12312, per=26.33%, avg=12300.00, stdev=16.97, samples=2 00:12:41.584 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:12:41.584 lat (usec) : 1000=0.02% 00:12:41.584 lat (msec) : 10=0.38%, 20=37.89%, 50=61.72% 00:12:41.584 cpu : usr=3.39%, sys=8.57%, ctx=224, majf=0, minf=12 00:12:41.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:41.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.585 issued rwts: total=2793,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.585 
job3: (groupid=0, jobs=1): err= 0: pid=69239: Sat Sep 28 01:25:37 2024 00:12:41.585 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:12:41.585 slat (usec): min=5, max=6678, avg=257.51, stdev=990.82 00:12:41.585 clat (usec): min=18829, max=46620, avg=33317.52, stdev=6390.57 00:12:41.585 lat (usec): min=22599, max=46678, avg=33575.03, stdev=6361.24 00:12:41.585 clat percentiles (usec): 00:12:41.585 | 1.00th=[22676], 5.00th=[25035], 10.00th=[26870], 20.00th=[28705], 00:12:41.585 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30016], 60.00th=[32900], 00:12:41.585 | 70.00th=[36439], 80.00th=[40109], 90.00th=[44827], 95.00th=[45351], 00:12:41.585 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:12:41.585 | 99.99th=[46400] 00:12:41.585 write: IOPS=2515, BW=9.83MiB/s (10.3MB/s)(9.86MiB/1003msec); 0 zone resets 00:12:41.585 slat (usec): min=10, max=7985, avg=179.86, stdev=883.30 00:12:41.585 clat (usec): min=736, max=40087, avg=22988.30, stdev=5409.52 00:12:41.585 lat (usec): min=5594, max=40118, avg=23168.16, stdev=5375.31 00:12:41.585 clat percentiles (usec): 00:12:41.585 | 1.00th=[ 6194], 5.00th=[13960], 10.00th=[14353], 20.00th=[20317], 00:12:41.585 | 30.00th=[20841], 40.00th=[21627], 50.00th=[22414], 60.00th=[23725], 00:12:41.585 | 70.00th=[25822], 80.00th=[27919], 90.00th=[28705], 95.00th=[32113], 00:12:41.585 | 99.00th=[39060], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:12:41.585 | 99.99th=[40109] 00:12:41.585 bw ( KiB/s): min= 9088, max=10072, per=20.50%, avg=9580.00, stdev=695.79, samples=2 00:12:41.585 iops : min= 2272, max= 2518, avg=2395.00, stdev=173.95, samples=2 00:12:41.585 lat (usec) : 750=0.02% 00:12:41.585 lat (msec) : 10=0.70%, 20=8.66%, 50=90.61% 00:12:41.585 cpu : usr=1.50%, sys=8.08%, ctx=211, majf=0, minf=21 00:12:41.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:41.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.585 issued rwts: total=2048,2523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.585 00:12:41.585 Run status group 0 (all jobs): 00:12:41.585 READ: bw=39.2MiB/s (41.1MB/s), 8167KiB/s-12.4MiB/s (8364kB/s-13.0MB/s), io=39.4MiB (41.3MB), run=1003-1005msec 00:12:41.585 WRITE: bw=45.6MiB/s (47.8MB/s), 9.83MiB/s-14.0MiB/s (10.3MB/s-14.6MB/s), io=45.9MiB (48.1MB), run=1003-1005msec 00:12:41.585 00:12:41.585 Disk stats (read/write): 00:12:41.585 nvme0n1: ios=2097/2087, merge=0/0, ticks=15654/18390, in_queue=34044, util=87.34% 00:12:41.585 nvme0n2: ios=2593/3017, merge=0/0, ticks=13233/11984, in_queue=25217, util=88.07% 00:12:41.585 nvme0n3: ios=2424/2560, merge=0/0, ticks=20068/13142, in_queue=33210, util=89.16% 00:12:41.585 nvme0n4: ios=1850/2048, merge=0/0, ticks=15418/10427, in_queue=25845, util=89.40% 00:12:41.585 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:41.585 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69258 00:12:41.585 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:41.585 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:41.585 [global] 00:12:41.585 thread=1 00:12:41.585 invalidate=1 00:12:41.585 rw=read 00:12:41.585 time_based=1 00:12:41.585 runtime=10 
00:12:41.585 ioengine=libaio 00:12:41.585 direct=1 00:12:41.585 bs=4096 00:12:41.585 iodepth=1 00:12:41.585 norandommap=1 00:12:41.585 numjobs=1 00:12:41.585 00:12:41.585 [job0] 00:12:41.585 filename=/dev/nvme0n1 00:12:41.585 [job1] 00:12:41.585 filename=/dev/nvme0n2 00:12:41.585 [job2] 00:12:41.585 filename=/dev/nvme0n3 00:12:41.585 [job3] 00:12:41.585 filename=/dev/nvme0n4 00:12:41.585 Could not set queue depth (nvme0n1) 00:12:41.585 Could not set queue depth (nvme0n2) 00:12:41.585 Could not set queue depth (nvme0n3) 00:12:41.585 Could not set queue depth (nvme0n4) 00:12:41.585 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.585 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.585 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.585 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.585 fio-3.35 00:12:41.585 Starting 4 threads 00:12:44.874 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:44.874 fio: pid=69306, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:44.875 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=25657344, buflen=4096 00:12:44.875 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:45.148 fio: pid=69305, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:45.148 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=28602368, buflen=4096 00:12:45.148 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:45.148 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:45.425 fio: pid=69303, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:45.425 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=31383552, buflen=4096 00:12:45.425 01:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:45.425 01:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:45.684 fio: pid=69304, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:45.684 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4018176, buflen=4096 00:12:45.684 00:12:45.684 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69303: Sat Sep 28 01:25:41 2024 00:12:45.684 read: IOPS=2159, BW=8638KiB/s (8845kB/s)(29.9MiB/3548msec) 00:12:45.684 slat (usec): min=10, max=12268, avg=26.52, stdev=233.44 00:12:45.684 clat (usec): min=164, max=4530, avg=434.37, stdev=105.27 00:12:45.684 lat (usec): min=179, max=12533, avg=460.88, stdev=254.79 00:12:45.684 clat percentiles (usec): 00:12:45.684 | 1.00th=[ 208], 5.00th=[ 265], 10.00th=[ 293], 20.00th=[ 396], 00:12:45.684 | 30.00th=[ 412], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 457], 00:12:45.684 | 
70.00th=[ 474], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 537], 00:12:45.684 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 1270], 99.95th=[ 1745], 00:12:45.684 | 99.99th=[ 4555] 00:12:45.684 bw ( KiB/s): min= 8232, max= 8592, per=21.55%, avg=8349.33, stdev=133.93, samples=6 00:12:45.684 iops : min= 2058, max= 2148, avg=2087.33, stdev=33.48, samples=6 00:12:45.684 lat (usec) : 250=1.96%, 500=83.44%, 750=14.46%, 1000=0.01% 00:12:45.684 lat (msec) : 2=0.08%, 4=0.03%, 10=0.01% 00:12:45.684 cpu : usr=1.07%, sys=4.26%, ctx=7674, majf=0, minf=1 00:12:45.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.684 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.684 issued rwts: total=7663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.684 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69304: Sat Sep 28 01:25:41 2024 00:12:45.684 read: IOPS=4395, BW=17.2MiB/s (18.0MB/s)(67.8MiB/3951msec) 00:12:45.684 slat (usec): min=9, max=14390, avg=18.98, stdev=231.87 00:12:45.684 clat (usec): min=148, max=2384, avg=207.18, stdev=52.54 00:12:45.684 lat (usec): min=162, max=14903, avg=226.16, stdev=240.47 00:12:45.684 clat percentiles (usec): 00:12:45.684 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:12:45.684 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 202], 00:12:45.684 | 70.00th=[ 210], 80.00th=[ 225], 90.00th=[ 253], 95.00th=[ 281], 00:12:45.684 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 603], 99.95th=[ 824], 00:12:45.684 | 99.99th=[ 1582] 00:12:45.684 bw ( KiB/s): min=11394, max=19256, per=45.15%, avg=17496.29, stdev=2722.00, samples=7 00:12:45.684 iops : min= 2848, max= 4814, avg=4374.00, stdev=680.69, samples=7 00:12:45.684 lat (usec) : 250=89.37%, 500=10.46%, 750=0.12%, 1000=0.03% 00:12:45.684 lat (msec) : 2=0.02%, 4=0.01% 00:12:45.684 cpu : usr=1.09%, sys=6.05%, ctx=17378, majf=0, minf=2 00:12:45.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.684 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.684 issued rwts: total=17366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.684 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69305: Sat Sep 28 01:25:41 2024 00:12:45.684 read: IOPS=2122, BW=8487KiB/s (8691kB/s)(27.3MiB/3291msec) 00:12:45.685 slat (usec): min=9, max=7574, avg=31.54, stdev=125.99 00:12:45.685 clat (usec): min=185, max=2664, avg=436.02, stdev=78.52 00:12:45.685 lat (usec): min=201, max=7871, avg=467.56, stdev=147.90 00:12:45.685 clat percentiles (usec): 00:12:45.685 | 1.00th=[ 215], 5.00th=[ 355], 10.00th=[ 379], 20.00th=[ 400], 00:12:45.685 | 30.00th=[ 412], 40.00th=[ 429], 50.00th=[ 441], 60.00th=[ 449], 00:12:45.685 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 510], 00:12:45.685 | 99.00th=[ 545], 99.50th=[ 586], 99.90th=[ 1254], 99.95th=[ 2311], 00:12:45.685 | 99.99th=[ 2671] 00:12:45.685 bw ( KiB/s): min= 8240, max= 8576, per=21.57%, avg=8358.67, stdev=120.20, samples=6 00:12:45.685 iops : min= 2060, max= 2144, avg=2089.67, stdev=30.05, samples=6 00:12:45.685 lat (usec) : 250=1.92%, 
500=90.02%, 750=7.92%, 1000=0.03% 00:12:45.685 lat (msec) : 2=0.04%, 4=0.06% 00:12:45.685 cpu : usr=2.22%, sys=5.56%, ctx=6991, majf=0, minf=1 00:12:45.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.685 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.685 issued rwts: total=6984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.685 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69306: Sat Sep 28 01:25:41 2024 00:12:45.685 read: IOPS=2082, BW=8327KiB/s (8527kB/s)(24.5MiB/3009msec) 00:12:45.685 slat (usec): min=10, max=114, avg=22.21, stdev= 8.00 00:12:45.685 clat (usec): min=273, max=3337, avg=455.38, stdev=66.35 00:12:45.685 lat (usec): min=301, max=3351, avg=477.59, stdev=66.65 00:12:45.685 clat percentiles (usec): 00:12:45.685 | 1.00th=[ 371], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 416], 00:12:45.685 | 30.00th=[ 429], 40.00th=[ 441], 50.00th=[ 453], 60.00th=[ 461], 00:12:45.685 | 70.00th=[ 474], 80.00th=[ 490], 90.00th=[ 510], 95.00th=[ 529], 00:12:45.685 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 660], 99.95th=[ 1270], 00:12:45.685 | 99.99th=[ 3326] 00:12:45.685 bw ( KiB/s): min= 8232, max= 8592, per=21.55%, avg=8349.33, stdev=132.87, samples=6 00:12:45.685 iops : min= 2058, max= 2148, avg=2087.33, stdev=33.22, samples=6 00:12:45.685 lat (usec) : 500=85.30%, 750=14.59%, 1000=0.02% 00:12:45.685 lat (msec) : 2=0.05%, 4=0.03% 00:12:45.685 cpu : usr=1.20%, sys=4.26%, ctx=6268, majf=0, minf=2 00:12:45.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.685 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.685 issued rwts: total=6265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.685 00:12:45.685 Run status group 0 (all jobs): 00:12:45.685 READ: bw=37.8MiB/s (39.7MB/s), 8327KiB/s-17.2MiB/s (8527kB/s-18.0MB/s), io=150MiB (157MB), run=3009-3951msec 00:12:45.685 00:12:45.685 Disk stats (read/write): 00:12:45.685 nvme0n1: ios=7051/0, merge=0/0, ticks=3182/0, in_queue=3182, util=95.31% 00:12:45.685 nvme0n2: ios=16916/0, merge=0/0, ticks=3645/0, in_queue=3645, util=95.28% 00:12:45.685 nvme0n3: ios=6550/0, merge=0/0, ticks=2979/0, in_queue=2979, util=96.46% 00:12:45.685 nvme0n4: ios=5976/0, merge=0/0, ticks=2720/0, in_queue=2720, util=96.76% 00:12:45.942 01:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:45.942 01:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:46.200 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:46.200 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:46.767 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:46.767 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:47.026 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:47.026 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:47.593 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:47.593 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69258 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.851 nvmf hotplug test: fio failed as expected 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:47.851 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.109 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:48.110 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:48.110 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:48.110 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:48.110 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:48.110 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:48.110 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:48.110 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.110 01:25:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:48.110 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.110 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.110 rmmod nvme_tcp 00:12:48.110 rmmod nvme_fabrics 00:12:48.110 rmmod nvme_keyring 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 68865 ']' 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 68865 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 68865 ']' 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 68865 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68865 00:12:48.368 killing process with pid 68865 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68865' 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 68865 00:12:48.368 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 68865 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br 
nomaster 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:49.304 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:49.564 ************************************ 00:12:49.564 END TEST nvmf_fio_target 00:12:49.564 ************************************ 00:12:49.564 00:12:49.564 real 0m22.311s 00:12:49.564 user 1m21.968s 00:12:49.564 sys 0m10.404s 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:49.564 ************************************ 00:12:49.564 START TEST nvmf_bdevio 00:12:49.564 ************************************ 00:12:49.564 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:49.823 * Looking for test storage... 
00:12:49.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.823 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:49.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.824 --rc genhtml_branch_coverage=1 00:12:49.824 --rc genhtml_function_coverage=1 00:12:49.824 --rc genhtml_legend=1 00:12:49.824 --rc geninfo_all_blocks=1 00:12:49.824 --rc geninfo_unexecuted_blocks=1 00:12:49.824 00:12:49.824 ' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:49.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.824 --rc genhtml_branch_coverage=1 00:12:49.824 --rc genhtml_function_coverage=1 00:12:49.824 --rc genhtml_legend=1 00:12:49.824 --rc geninfo_all_blocks=1 00:12:49.824 --rc geninfo_unexecuted_blocks=1 00:12:49.824 00:12:49.824 ' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:49.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.824 --rc genhtml_branch_coverage=1 00:12:49.824 --rc genhtml_function_coverage=1 00:12:49.824 --rc genhtml_legend=1 00:12:49.824 --rc geninfo_all_blocks=1 00:12:49.824 --rc geninfo_unexecuted_blocks=1 00:12:49.824 00:12:49.824 ' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:49.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.824 --rc genhtml_branch_coverage=1 00:12:49.824 --rc genhtml_function_coverage=1 00:12:49.824 --rc genhtml_legend=1 00:12:49.824 --rc geninfo_all_blocks=1 00:12:49.824 --rc geninfo_unexecuted_blocks=1 00:12:49.824 00:12:49.824 ' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.824 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
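With NET_TYPE=virt and --transport=tcp, nvmftestinit falls through to nvmf_veth_init, which the rest of this trace walks through: an idempotent cleanup pass first tries to remove any leftover interfaces (the "Cannot find device" / "Cannot open network namespace" messages are the expected misses, each followed by true), then a fresh nvmf_tgt_ns_spdk namespace is created, two initiator veths stay on the host (nvmf_init_if 10.0.0.1/24, nvmf_init_if2 10.0.0.2/24), two target veths are moved into the namespace (nvmf_tgt_if 10.0.0.3/24, nvmf_tgt_if2 10.0.0.4/24), all peer ends are enslaved to the nvmf_br bridge, port 4420 and bridge forwarding are opened in iptables, and connectivity is verified with pings in both directions. Stripped of the harness wrappers, the topology amounts to roughly the sketch below; only one initiator/target pair is shown, and the names and addresses are taken from the trace rather than the literal common.sh code.

  # minimal sketch: one host-side initiator veth and one namespaced target veth, bridged together
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end goes into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # NVMe/TCP listener port
  ping -c 1 10.0.0.3                                                   # host -> target namespace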
00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.824 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:49.825 Cannot find device "nvmf_init_br" 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:49.825 Cannot find device "nvmf_init_br2" 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:49.825 Cannot find device "nvmf_tgt_br" 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.825 Cannot find device "nvmf_tgt_br2" 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:49.825 Cannot find device "nvmf_init_br" 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:49.825 Cannot find device "nvmf_init_br2" 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:49.825 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:50.088 Cannot find device "nvmf_tgt_br" 00:12:50.088 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:50.088 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:50.088 Cannot find device "nvmf_tgt_br2" 00:12:50.088 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:50.088 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:50.088 Cannot find device "nvmf_br" 00:12:50.088 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:50.088 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:50.089 Cannot find device "nvmf_init_if" 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:50.089 Cannot find device "nvmf_init_if2" 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.089 
01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:50.089 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.089 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.089 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.089 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:50.089 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:50.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:50.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:12:50.356 00:12:50.356 --- 10.0.0.3 ping statistics --- 00:12:50.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.356 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:50.356 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:50.356 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:12:50.356 00:12:50.356 --- 10.0.0.4 ping statistics --- 00:12:50.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.356 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:50.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:50.356 00:12:50.356 --- 10.0.0.1 ping statistics --- 00:12:50.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.356 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:50.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:50.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:50.356 00:12:50.356 --- 10.0.0.2 ping statistics --- 00:12:50.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.356 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=69638 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 69638 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 69638 ']' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.356 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.356 [2024-09-28 01:25:46.258592] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:50.356 [2024-09-28 01:25:46.258775] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.615 [2024-09-28 01:25:46.439909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.874 [2024-09-28 01:25:46.677355] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.874 [2024-09-28 01:25:46.677473] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.874 [2024-09-28 01:25:46.677513] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.874 [2024-09-28 01:25:46.677529] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.874 [2024-09-28 01:25:46.677544] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.874 [2024-09-28 01:25:46.677769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:12:50.874 [2024-09-28 01:25:46.677946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:12:50.874 [2024-09-28 01:25:46.678025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.874 [2024-09-28 01:25:46.678051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:12:51.131 [2024-09-28 01:25:46.860649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 [2024-09-28 01:25:47.289333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.390 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.649 Malloc0 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.649 [2024-09-28 01:25:47.390133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:51.649 { 00:12:51.649 "params": { 00:12:51.649 "name": "Nvme$subsystem", 00:12:51.649 "trtype": "$TEST_TRANSPORT", 00:12:51.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.649 "adrfam": "ipv4", 00:12:51.649 "trsvcid": "$NVMF_PORT", 00:12:51.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.649 "hdgst": ${hdgst:-false}, 00:12:51.649 "ddgst": ${ddgst:-false} 00:12:51.649 }, 00:12:51.649 "method": "bdev_nvme_attach_controller" 00:12:51.649 } 00:12:51.649 EOF 00:12:51.649 )") 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
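At this point the target side is fully stood up: nvmf_tgt is running inside the nvmf_tgt_ns_spdk namespace on core mask 0x78, the TCP transport exists, the 64 MiB / 512 B-block Malloc0 bdev is exposed as a namespace of nqn.2016-06.io.spdk:cnode1, and a listener is open on 10.0.0.3:4420. gen_nvmf_target_json is now rendering the JSON config (printed next in the trace) that tells the bdevio app to attach that subsystem as bdev Nvme1 over NVMe/TCP. Since rpc_cmd is a thin wrapper around scripts/rpc.py, the bring-up just traced reduces to roughly the following sketch; bdevio.json here is a hypothetical file holding the attach-controller config shown below, and paths are relative to the SPDK repo.

  # rough standalone equivalent of the target bring-up performed by bdevio.sh
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  # (wait for the RPC socket before issuing rpc.py calls)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  ./test/bdev/bdevio/bdevio --json bdevio.json    # bdevio.json: the JSON config printed below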
00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:12:51.649 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:51.649 "params": { 00:12:51.649 "name": "Nvme1", 00:12:51.649 "trtype": "tcp", 00:12:51.649 "traddr": "10.0.0.3", 00:12:51.649 "adrfam": "ipv4", 00:12:51.649 "trsvcid": "4420", 00:12:51.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.649 "hdgst": false, 00:12:51.649 "ddgst": false 00:12:51.649 }, 00:12:51.649 "method": "bdev_nvme_attach_controller" 00:12:51.649 }' 00:12:51.649 [2024-09-28 01:25:47.505481] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:51.649 [2024-09-28 01:25:47.505639] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69677 ] 00:12:51.908 [2024-09-28 01:25:47.680243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.167 [2024-09-28 01:25:47.908993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.167 [2024-09-28 01:25:47.909132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.167 [2024-09-28 01:25:47.909345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.167 [2024-09-28 01:25:48.093897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:52.425 I/O targets: 00:12:52.425 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:52.425 00:12:52.425 00:12:52.425 CUnit - A unit testing framework for C - Version 2.1-3 00:12:52.425 http://cunit.sourceforge.net/ 00:12:52.425 00:12:52.425 00:12:52.425 Suite: bdevio tests on: Nvme1n1 00:12:52.425 Test: blockdev write read block ...passed 00:12:52.425 Test: blockdev write zeroes read block ...passed 00:12:52.425 Test: blockdev write zeroes read no split ...passed 00:12:52.425 Test: blockdev write zeroes read split ...passed 00:12:52.425 Test: blockdev write zeroes read split partial ...passed 00:12:52.425 Test: blockdev reset ...[2024-09-28 01:25:48.343977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:52.425 [2024-09-28 01:25:48.344173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:12:52.425 [2024-09-28 01:25:48.357627] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:52.425 passed 00:12:52.684 Test: blockdev write read 8 blocks ...passed 00:12:52.684 Test: blockdev write read size > 128k ...passed 00:12:52.684 Test: blockdev write read invalid size ...passed 00:12:52.684 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:52.684 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:52.684 Test: blockdev write read max offset ...passed 00:12:52.684 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:52.684 Test: blockdev writev readv 8 blocks ...passed 00:12:52.684 Test: blockdev writev readv 30 x 1block ...passed 00:12:52.684 Test: blockdev writev readv block ...passed 00:12:52.684 Test: blockdev writev readv size > 128k ...passed 00:12:52.684 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:52.684 Test: blockdev comparev and writev ...[2024-09-28 01:25:48.371492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.684 [2024-09-28 01:25:48.371578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.371612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.684 [2024-09-28 01:25:48.371633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.372014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.684 [2024-09-28 01:25:48.372050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.372074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.684 [2024-09-28 01:25:48.372094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.372434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.684 [2024-09-28 01:25:48.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.372542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.684 [2024-09-28 01:25:48.373012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.373674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.684 [2024-09-28 01:25:48.373720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.373748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.684 [2024-09-28 01:25:48.373768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:52.684 passed 00:12:52.684 Test: blockdev nvme passthru rw ...passed 00:12:52.684 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:25:48.375122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.684 [2024-09-28 01:25:48.375288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.375546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.684 [2024-09-28 01:25:48.375584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:52.684 [2024-09-28 01:25:48.375792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.684 [2024-09-28 01:25:48.375913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:52.684 passed 00:12:52.684 Test: blockdev nvme admin passthru ...[2024-09-28 01:25:48.376091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.684 [2024-09-28 01:25:48.376126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:52.684 passed 00:12:52.684 Test: blockdev copy ...passed 00:12:52.684 00:12:52.684 Run Summary: Type Total Ran Passed Failed Inactive 00:12:52.684 suites 1 1 n/a 0 0 00:12:52.684 tests 23 23 23 0 0 00:12:52.684 asserts 152 152 152 0 n/a 00:12:52.684 00:12:52.685 Elapsed time = 0.282 seconds 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.620 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.905 rmmod nvme_tcp 00:12:53.905 rmmod nvme_fabrics 00:12:53.905 rmmod nvme_keyring 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
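The run summary above closes the bdevio suite: all 23 CUnit tests against the TCP-attached Nvme1n1 bdev passed in about 0.28 seconds. The COMPARE FAILURE and ABORTED - FAILED FUSED completions logged during "blockdev comparev and writev" come from the deliberately mismatching fused compare-and-write cases, which is why those tests still report passed. The trace then moves into teardown; outside the harness it boils down to roughly the sketch below, with the pid and names taken from this run and the final namespace removal assumed, since _remove_spdk_ns has its output redirected away.

  # approximate standalone equivalent of the teardown traced here
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 69638 && wait 69638                                # nvmfpid, the nvmf_tgt started earlier
  modprobe -r nvme-tcp nvme-fabrics                       # unloads nvme_tcp/nvme_fabrics/nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules tagged by ipts above
  for l in nvmf_init_if nvmf_init_if2 nvmf_br; do ip link delete "$l"; done   # veth peers go with them
  ip netns delete nvmf_tgt_ns_spdk                        # assumed: what _remove_spdk_ns does here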
00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 69638 ']' 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 69638 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 69638 ']' 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 69638 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69638 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:53.905 killing process with pid 69638 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69638' 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 69638 00:12:53.905 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 69638 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:55.285 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:55.285 00:12:55.285 real 0m5.683s 00:12:55.285 user 0m20.074s 00:12:55.285 sys 0m1.089s 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.285 ************************************ 00:12:55.285 END TEST nvmf_bdevio 00:12:55.285 ************************************ 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:55.285 00:12:55.285 real 2m59.614s 00:12:55.285 user 7m58.368s 00:12:55.285 sys 0m54.297s 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.285 01:25:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:55.285 ************************************ 00:12:55.285 END TEST nvmf_target_core 00:12:55.285 ************************************ 00:12:55.285 01:25:51 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:55.285 01:25:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.285 01:25:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.285 01:25:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.545 ************************************ 00:12:55.545 START TEST nvmf_target_extra 00:12:55.545 ************************************ 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:55.545 * Looking for test storage... 
00:12:55.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.545 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:55.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.545 --rc genhtml_branch_coverage=1 00:12:55.546 --rc genhtml_function_coverage=1 00:12:55.546 --rc genhtml_legend=1 00:12:55.546 --rc geninfo_all_blocks=1 00:12:55.546 --rc geninfo_unexecuted_blocks=1 00:12:55.546 00:12:55.546 ' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:55.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.546 --rc genhtml_branch_coverage=1 00:12:55.546 --rc genhtml_function_coverage=1 00:12:55.546 --rc genhtml_legend=1 00:12:55.546 --rc geninfo_all_blocks=1 00:12:55.546 --rc geninfo_unexecuted_blocks=1 00:12:55.546 00:12:55.546 ' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:55.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.546 --rc genhtml_branch_coverage=1 00:12:55.546 --rc genhtml_function_coverage=1 00:12:55.546 --rc genhtml_legend=1 00:12:55.546 --rc geninfo_all_blocks=1 00:12:55.546 --rc geninfo_unexecuted_blocks=1 00:12:55.546 00:12:55.546 ' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:55.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.546 --rc genhtml_branch_coverage=1 00:12:55.546 --rc genhtml_function_coverage=1 00:12:55.546 --rc genhtml_legend=1 00:12:55.546 --rc geninfo_all_blocks=1 00:12:55.546 --rc geninfo_unexecuted_blocks=1 00:12:55.546 00:12:55.546 ' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.546 01:25:51 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.546 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.546 ************************************ 00:12:55.546 START TEST nvmf_auth_target 00:12:55.546 ************************************ 00:12:55.546 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:55.806 * Looking for test storage... 
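The trace above exercises the version-compare helpers from scripts/common.sh: lt 1.15 2 splits both version strings on the characters . - :, compares the components numerically from left to right, and returns 0 because 1 < 2, which is what selects the --rc lcov_branch_coverage/--rc lcov_function_coverage options exported right after. A condensed, illustrative re-creation of that logic follows; the upstream decimal/cmp_versions/lt helpers handle more operators and edge cases than this sketch does.

decimal() {                       # coerce a version component to a plain number
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}
cmp_versions() {                  # cmp_versions <ver1> <op> <ver2>
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
        (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]       # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "old lcov, use --rc lcov_*_coverage options"   # the branch taken in the run above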
00:12:55.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:55.806 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.807 --rc genhtml_branch_coverage=1 00:12:55.807 --rc genhtml_function_coverage=1 00:12:55.807 --rc genhtml_legend=1 00:12:55.807 --rc geninfo_all_blocks=1 00:12:55.807 --rc geninfo_unexecuted_blocks=1 00:12:55.807 00:12:55.807 ' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.807 --rc genhtml_branch_coverage=1 00:12:55.807 --rc genhtml_function_coverage=1 00:12:55.807 --rc genhtml_legend=1 00:12:55.807 --rc geninfo_all_blocks=1 00:12:55.807 --rc geninfo_unexecuted_blocks=1 00:12:55.807 00:12:55.807 ' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.807 --rc genhtml_branch_coverage=1 00:12:55.807 --rc genhtml_function_coverage=1 00:12:55.807 --rc genhtml_legend=1 00:12:55.807 --rc geninfo_all_blocks=1 00:12:55.807 --rc geninfo_unexecuted_blocks=1 00:12:55.807 00:12:55.807 ' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.807 --rc genhtml_branch_coverage=1 00:12:55.807 --rc genhtml_function_coverage=1 00:12:55.807 --rc genhtml_legend=1 00:12:55.807 --rc geninfo_all_blocks=1 00:12:55.807 --rc geninfo_unexecuted_blocks=1 00:12:55.807 00:12:55.807 ' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.807 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.807 
01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.807 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:55.808 Cannot find device "nvmf_init_br" 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:55.808 Cannot find device "nvmf_init_br2" 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:55.808 Cannot find device "nvmf_tgt_br" 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.808 Cannot find device "nvmf_tgt_br2" 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:55.808 Cannot find device "nvmf_init_br" 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:55.808 Cannot find device "nvmf_init_br2" 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:55.808 Cannot find device "nvmf_tgt_br" 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:55.808 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:56.067 Cannot find device "nvmf_tgt_br2" 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:56.067 Cannot find device "nvmf_br" 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:56.067 Cannot find device "nvmf_init_if" 00:12:56.067 01:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:56.067 Cannot find device "nvmf_init_if2" 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.067 01:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:56.067 01:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:56.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:12:56.327 00:12:56.327 --- 10.0.0.3 ping statistics --- 00:12:56.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.327 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:56.327 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:56.327 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:12:56.327 00:12:56.327 --- 10.0.0.4 ping statistics --- 00:12:56.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.327 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:56.327 00:12:56.327 --- 10.0.0.1 ping statistics --- 00:12:56.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.327 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:56.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:12:56.327 00:12:56.327 --- 10.0.0.2 ping statistics --- 00:12:56.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.327 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=70019 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 70019 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70019 ']' 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
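The block above is nvmf_veth_init building the virtual test network: a target-side network namespace, veth pairs whose host-side ends hang off a bridge, 10.0.0.1-4/24 addresses on the two ends, iptables ACCEPT rules for port 4420, and ping checks across the bridge and from inside the namespace. A reduced single-path sketch of the same topology is below; it keeps the names from the trace but leaves out the second initiator/target pair and the iptables rules that the real helper also sets up.

ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the two host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                          # initiator -> target, as checked above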
00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.327 01:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.265 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:57.265 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:57.265 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:57.265 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:57.265 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=70052 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5efceac6dd142f88b1a2acb2a010dc46d77dd365971c9169 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.GtR 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 5efceac6dd142f88b1a2acb2a010dc46d77dd365971c9169 0 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5efceac6dd142f88b1a2acb2a010dc46d77dd365971c9169 0 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5efceac6dd142f88b1a2acb2a010dc46d77dd365971c9169 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:57.524 01:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.GtR 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.GtR 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.GtR 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=1b1998e0989f03fed3675056da4686c993c6eb5b6cc0016b0d45abd1ec82485d 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.fPj 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 1b1998e0989f03fed3675056da4686c993c6eb5b6cc0016b0d45abd1ec82485d 3 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 1b1998e0989f03fed3675056da4686c993c6eb5b6cc0016b0d45abd1ec82485d 3 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=1b1998e0989f03fed3675056da4686c993c6eb5b6cc0016b0d45abd1ec82485d 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.fPj 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.fPj 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.fPj 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:12:57.524 01:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=1d8203f3d6d3b4beddb075c9caf4ed52 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.dCe 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 1d8203f3d6d3b4beddb075c9caf4ed52 1 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 1d8203f3d6d3b4beddb075c9caf4ed52 1 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=1d8203f3d6d3b4beddb075c9caf4ed52 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.dCe 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.dCe 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dCe 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=62083928a28f087573ca53e6a681935506fb5f83374cd366 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.kfP 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 62083928a28f087573ca53e6a681935506fb5f83374cd366 2 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 62083928a28f087573ca53e6a681935506fb5f83374cd366 2 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:57.524 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:57.525 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=62083928a28f087573ca53e6a681935506fb5f83374cd366 00:12:57.525 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:12:57.525 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.kfP 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.kfP 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.kfP 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=de024d09bc7c72a8203dbc0ff29d77071ca4d068f681146c 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.xbH 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key de024d09bc7c72a8203dbc0ff29d77071ca4d068f681146c 2 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 de024d09bc7c72a8203dbc0ff29d77071ca4d068f681146c 2 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=de024d09bc7c72a8203dbc0ff29d77071ca4d068f681146c 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.xbH 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.xbH 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.xbH 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:57.784 01:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=0ddbc99c734566144fd650253a13cccf 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.t4Y 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 0ddbc99c734566144fd650253a13cccf 1 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 0ddbc99c734566144fd650253a13cccf 1 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=0ddbc99c734566144fd650253a13cccf 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.t4Y 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.t4Y 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.t4Y 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:57.784 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bef08b57fc58339c084d12f2327325f8ad8329215abd847886b11534caea3ffe 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.xPJ 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
bef08b57fc58339c084d12f2327325f8ad8329215abd847886b11534caea3ffe 3 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 bef08b57fc58339c084d12f2327325f8ad8329215abd847886b11534caea3ffe 3 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bef08b57fc58339c084d12f2327325f8ad8329215abd847886b11534caea3ffe 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.xPJ 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.xPJ 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.xPJ 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 70019 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70019 ']' 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:57.785 01:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 70052 /var/tmp/host.sock 00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70052 ']' 00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
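The key material above comes from gen_dhchap_key in nvmf/common.sh: it draws len/2 random bytes as hex with xxd, wraps them into a DHHC-1 secret string via a short python helper, and drops the result into a mode-0600 temp file that is then recorded in keys[]/ckeys[]. A rough sketch of that flow is below, under an illustrative function name; the python formatting step is not expanded in the trace, so it is only marked as a placeholder here rather than reproduced.

gen_dhchap_key_sketch() {                                   # hypothetical name; mirrors the traced steps
    local digest=$1 len=$2
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # digest -> DHHC-1 hash id, as in the trace
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # e.g. 48 hex chars from 24 random bytes
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # format_dhchap_key "$key" "${digests[$digest]}" would wrap the raw hex
    # into the "DHHC-1:..." secret format here; kept opaque in this sketch.
    echo "$key" > "$file"
    chmod 0600 "$file"                                      # keys must not be world-readable
    echo "$file"
}
keys[0]=$(gen_dhchap_key_sketch null 48)                    # same shape of call as target/auth.sh@94 above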
00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.353 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GtR 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GtR 00:12:58.612 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GtR 00:12:58.871 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.fPj ]] 00:12:58.871 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fPj 00:12:58.872 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.872 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.872 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.872 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fPj 00:12:58.872 01:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fPj 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dCe 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dCe 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dCe 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.kfP ]] 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kfP 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kfP 00:12:59.440 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kfP 00:12:59.699 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:59.700 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xbH 00:12:59.700 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.700 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.700 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.700 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xbH 00:12:59.700 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xbH 00:12:59.958 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.t4Y ]] 00:12:59.958 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t4Y 00:12:59.958 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.958 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.958 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.958 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t4Y 00:12:59.958 01:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t4Y 00:13:00.217 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:00.217 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xPJ 00:13:00.217 01:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.217 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.217 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.217 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xPJ 00:13:00.217 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xPJ 00:13:00.476 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:00.476 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:00.476 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.476 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.476 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:00.476 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.735 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.736 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.303 00:13:01.303 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.303 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.303 01:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.563 { 00:13:01.563 "cntlid": 1, 00:13:01.563 "qid": 0, 00:13:01.563 "state": "enabled", 00:13:01.563 "thread": "nvmf_tgt_poll_group_000", 00:13:01.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:01.563 "listen_address": { 00:13:01.563 "trtype": "TCP", 00:13:01.563 "adrfam": "IPv4", 00:13:01.563 "traddr": "10.0.0.3", 00:13:01.563 "trsvcid": "4420" 00:13:01.563 }, 00:13:01.563 "peer_address": { 00:13:01.563 "trtype": "TCP", 00:13:01.563 "adrfam": "IPv4", 00:13:01.563 "traddr": "10.0.0.1", 00:13:01.563 "trsvcid": "46828" 00:13:01.563 }, 00:13:01.563 "auth": { 00:13:01.563 "state": "completed", 00:13:01.563 "digest": "sha256", 00:13:01.563 "dhgroup": "null" 00:13:01.563 } 00:13:01.563 } 00:13:01.563 ]' 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.563 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.823 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:01.823 01:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:06.038 01:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.038 01:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:06.038 01:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.038 01:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.038 01:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.038 01:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.038 01:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:06.038 01:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.297 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.297 01:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.864 00:13:06.864 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.864 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.864 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.123 { 00:13:07.123 "cntlid": 3, 00:13:07.123 "qid": 0, 00:13:07.123 "state": "enabled", 00:13:07.123 "thread": "nvmf_tgt_poll_group_000", 00:13:07.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:07.123 "listen_address": { 00:13:07.123 "trtype": "TCP", 00:13:07.123 "adrfam": "IPv4", 00:13:07.123 "traddr": "10.0.0.3", 00:13:07.123 "trsvcid": "4420" 00:13:07.123 }, 00:13:07.123 "peer_address": { 00:13:07.123 "trtype": "TCP", 00:13:07.123 "adrfam": "IPv4", 00:13:07.123 "traddr": "10.0.0.1", 00:13:07.123 "trsvcid": "51544" 00:13:07.123 }, 00:13:07.123 "auth": { 00:13:07.123 "state": "completed", 00:13:07.123 "digest": "sha256", 00:13:07.123 "dhgroup": "null" 00:13:07.123 } 00:13:07.123 } 00:13:07.123 ]' 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.123 01:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.123 01:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.381 01:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret 
DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:07.381 01:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:08.315 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.315 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:08.315 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.315 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.315 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.315 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.315 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:08.315 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.573 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.832 00:13:08.832 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.832 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.832 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.092 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.092 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.092 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.092 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.092 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.092 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.092 { 00:13:09.092 "cntlid": 5, 00:13:09.092 "qid": 0, 00:13:09.092 "state": "enabled", 00:13:09.092 "thread": "nvmf_tgt_poll_group_000", 00:13:09.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:09.092 "listen_address": { 00:13:09.092 "trtype": "TCP", 00:13:09.092 "adrfam": "IPv4", 00:13:09.092 "traddr": "10.0.0.3", 00:13:09.092 "trsvcid": "4420" 00:13:09.092 }, 00:13:09.092 "peer_address": { 00:13:09.092 "trtype": "TCP", 00:13:09.092 "adrfam": "IPv4", 00:13:09.092 "traddr": "10.0.0.1", 00:13:09.092 "trsvcid": "51572" 00:13:09.092 }, 00:13:09.092 "auth": { 00:13:09.092 "state": "completed", 00:13:09.092 "digest": "sha256", 00:13:09.092 "dhgroup": "null" 00:13:09.092 } 00:13:09.092 } 00:13:09.092 ]' 00:13:09.092 01:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.351 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.351 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.351 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:09.351 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.351 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.351 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.351 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.610 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:09.610 01:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:10.178 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.178 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:10.178 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.178 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.178 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.178 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.178 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:10.178 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.438 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.005 00:13:11.005 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.005 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.005 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.270 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.270 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.270 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.270 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.270 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.270 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.270 { 00:13:11.270 "cntlid": 7, 00:13:11.270 "qid": 0, 00:13:11.270 "state": "enabled", 00:13:11.270 "thread": "nvmf_tgt_poll_group_000", 00:13:11.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:11.270 "listen_address": { 00:13:11.270 "trtype": "TCP", 00:13:11.270 "adrfam": "IPv4", 00:13:11.270 "traddr": "10.0.0.3", 00:13:11.270 "trsvcid": "4420" 00:13:11.270 }, 00:13:11.270 "peer_address": { 00:13:11.270 "trtype": "TCP", 00:13:11.270 "adrfam": "IPv4", 00:13:11.270 "traddr": "10.0.0.1", 00:13:11.270 "trsvcid": "51594" 00:13:11.270 }, 00:13:11.270 "auth": { 00:13:11.270 "state": "completed", 00:13:11.270 "digest": "sha256", 00:13:11.270 "dhgroup": "null" 00:13:11.270 } 00:13:11.270 } 00:13:11.270 ]' 00:13:11.270 01:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.270 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.270 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.270 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:11.270 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.270 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.271 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.271 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.530 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:11.530 01:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.500 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.785 00:13:12.785 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.785 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.785 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.044 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.044 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.044 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.044 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.044 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.044 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.044 { 00:13:13.044 "cntlid": 9, 00:13:13.044 "qid": 0, 00:13:13.044 "state": "enabled", 00:13:13.044 "thread": "nvmf_tgt_poll_group_000", 00:13:13.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:13.044 "listen_address": { 00:13:13.044 "trtype": "TCP", 00:13:13.044 "adrfam": "IPv4", 00:13:13.044 "traddr": "10.0.0.3", 00:13:13.044 "trsvcid": "4420" 00:13:13.044 }, 00:13:13.044 "peer_address": { 00:13:13.044 "trtype": "TCP", 00:13:13.044 "adrfam": "IPv4", 00:13:13.044 "traddr": "10.0.0.1", 00:13:13.044 "trsvcid": "51620" 00:13:13.044 }, 00:13:13.044 "auth": { 00:13:13.044 "state": "completed", 00:13:13.044 "digest": "sha256", 00:13:13.044 "dhgroup": "ffdhe2048" 00:13:13.044 } 00:13:13.044 } 00:13:13.044 ]' 00:13:13.044 01:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.303 01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.303 01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.303 01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:13.303 01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.303 01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.303 01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.303 01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.562 
01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:13.562 01:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:14.130 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.130 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:14.130 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.130 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.130 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.130 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.130 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:14.130 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.699 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.959 00:13:14.959 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.959 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.959 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.218 { 00:13:15.218 "cntlid": 11, 00:13:15.218 "qid": 0, 00:13:15.218 "state": "enabled", 00:13:15.218 "thread": "nvmf_tgt_poll_group_000", 00:13:15.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:15.218 "listen_address": { 00:13:15.218 "trtype": "TCP", 00:13:15.218 "adrfam": "IPv4", 00:13:15.218 "traddr": "10.0.0.3", 00:13:15.218 "trsvcid": "4420" 00:13:15.218 }, 00:13:15.218 "peer_address": { 00:13:15.218 "trtype": "TCP", 00:13:15.218 "adrfam": "IPv4", 00:13:15.218 "traddr": "10.0.0.1", 00:13:15.218 "trsvcid": "51652" 00:13:15.218 }, 00:13:15.218 "auth": { 00:13:15.218 "state": "completed", 00:13:15.218 "digest": "sha256", 00:13:15.218 "dhgroup": "ffdhe2048" 00:13:15.218 } 00:13:15.218 } 00:13:15.218 ]' 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.218 01:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.218 01:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:15.218 01:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.218 01:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.218 01:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.218 
01:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.477 01:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:15.477 01:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.414 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.982 00:13:16.982 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.982 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.982 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.241 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.241 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.241 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.241 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.241 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.241 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.241 { 00:13:17.241 "cntlid": 13, 00:13:17.241 "qid": 0, 00:13:17.241 "state": "enabled", 00:13:17.241 "thread": "nvmf_tgt_poll_group_000", 00:13:17.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:17.241 "listen_address": { 00:13:17.241 "trtype": "TCP", 00:13:17.241 "adrfam": "IPv4", 00:13:17.241 "traddr": "10.0.0.3", 00:13:17.241 "trsvcid": "4420" 00:13:17.241 }, 00:13:17.241 "peer_address": { 00:13:17.241 "trtype": "TCP", 00:13:17.241 "adrfam": "IPv4", 00:13:17.241 "traddr": "10.0.0.1", 00:13:17.241 "trsvcid": "33846" 00:13:17.241 }, 00:13:17.241 "auth": { 00:13:17.241 "state": "completed", 00:13:17.241 "digest": "sha256", 00:13:17.241 "dhgroup": "ffdhe2048" 00:13:17.241 } 00:13:17.241 } 00:13:17.241 ]' 00:13:17.241 01:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.241 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.241 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.241 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:17.241 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.241 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.241 01:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.241 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.500 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:17.500 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:18.068 01:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.327 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:18.327 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.327 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.327 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.327 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.327 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:18.327 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.586 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.846 00:13:18.846 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.846 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.846 01:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.413 { 00:13:19.413 "cntlid": 15, 00:13:19.413 "qid": 0, 00:13:19.413 "state": "enabled", 00:13:19.413 "thread": "nvmf_tgt_poll_group_000", 00:13:19.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:19.413 "listen_address": { 00:13:19.413 "trtype": "TCP", 00:13:19.413 "adrfam": "IPv4", 00:13:19.413 "traddr": "10.0.0.3", 00:13:19.413 "trsvcid": "4420" 00:13:19.413 }, 00:13:19.413 "peer_address": { 00:13:19.413 "trtype": "TCP", 00:13:19.413 "adrfam": "IPv4", 00:13:19.413 "traddr": "10.0.0.1", 00:13:19.413 "trsvcid": "33860" 00:13:19.413 }, 00:13:19.413 "auth": { 00:13:19.413 "state": "completed", 00:13:19.413 "digest": "sha256", 00:13:19.413 "dhgroup": "ffdhe2048" 00:13:19.413 } 00:13:19.413 } 00:13:19.413 ]' 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.413 
01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.413 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.672 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:19.672 01:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.607 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.174 00:13:21.174 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.174 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.174 01:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.433 { 00:13:21.433 "cntlid": 17, 00:13:21.433 "qid": 0, 00:13:21.433 "state": "enabled", 00:13:21.433 "thread": "nvmf_tgt_poll_group_000", 00:13:21.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:21.433 "listen_address": { 00:13:21.433 "trtype": "TCP", 00:13:21.433 "adrfam": "IPv4", 00:13:21.433 "traddr": "10.0.0.3", 00:13:21.433 "trsvcid": "4420" 00:13:21.433 }, 00:13:21.433 "peer_address": { 00:13:21.433 "trtype": "TCP", 00:13:21.433 "adrfam": "IPv4", 00:13:21.433 "traddr": "10.0.0.1", 00:13:21.433 "trsvcid": "33902" 00:13:21.433 }, 00:13:21.433 "auth": { 00:13:21.433 "state": "completed", 00:13:21.433 "digest": "sha256", 00:13:21.433 "dhgroup": "ffdhe3072" 00:13:21.433 } 00:13:21.433 } 00:13:21.433 ]' 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.433 01:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.433 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.733 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:21.733 01:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:22.668 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.668 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:22.668 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.668 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.668 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.668 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.668 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:22.668 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.927 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.185 00:13:23.185 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.185 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.185 01:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.444 { 00:13:23.444 "cntlid": 19, 00:13:23.444 "qid": 0, 00:13:23.444 "state": "enabled", 00:13:23.444 "thread": "nvmf_tgt_poll_group_000", 00:13:23.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:23.444 "listen_address": { 00:13:23.444 "trtype": "TCP", 00:13:23.444 "adrfam": "IPv4", 00:13:23.444 "traddr": "10.0.0.3", 00:13:23.444 "trsvcid": "4420" 00:13:23.444 }, 00:13:23.444 "peer_address": { 00:13:23.444 "trtype": "TCP", 00:13:23.444 "adrfam": "IPv4", 00:13:23.444 "traddr": "10.0.0.1", 00:13:23.444 "trsvcid": "33930" 00:13:23.444 }, 00:13:23.444 "auth": { 00:13:23.444 "state": "completed", 00:13:23.444 "digest": "sha256", 00:13:23.444 "dhgroup": "ffdhe3072" 00:13:23.444 } 00:13:23.444 } 00:13:23.444 ]' 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.444 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.702 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:23.702 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.703 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.703 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.703 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.961 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:23.961 01:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:24.529 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.529 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:24.529 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.529 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.529 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.529 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.529 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:24.529 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.788 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.047 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.047 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.047 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.047 01:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.306 00:13:25.306 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.306 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.306 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.565 { 00:13:25.565 "cntlid": 21, 00:13:25.565 "qid": 0, 00:13:25.565 "state": "enabled", 00:13:25.565 "thread": "nvmf_tgt_poll_group_000", 00:13:25.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:25.565 "listen_address": { 00:13:25.565 "trtype": "TCP", 00:13:25.565 "adrfam": "IPv4", 00:13:25.565 "traddr": "10.0.0.3", 00:13:25.565 "trsvcid": "4420" 00:13:25.565 }, 00:13:25.565 "peer_address": { 00:13:25.565 "trtype": "TCP", 00:13:25.565 "adrfam": "IPv4", 00:13:25.565 "traddr": "10.0.0.1", 00:13:25.565 "trsvcid": "33970" 00:13:25.565 }, 00:13:25.565 "auth": { 00:13:25.565 "state": "completed", 00:13:25.565 "digest": "sha256", 00:13:25.565 "dhgroup": "ffdhe3072" 00:13:25.565 } 00:13:25.565 } 00:13:25.565 ]' 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:25.565 01:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.565 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:25.823 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.823 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.824 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.824 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.082 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:26.082 01:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:26.650 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.650 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:26.650 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.650 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.650 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.650 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.650 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:26.650 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.909 01:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.477 00:13:27.477 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.477 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.477 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.736 { 00:13:27.736 "cntlid": 23, 00:13:27.736 "qid": 0, 00:13:27.736 "state": "enabled", 00:13:27.736 "thread": "nvmf_tgt_poll_group_000", 00:13:27.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:27.736 "listen_address": { 00:13:27.736 "trtype": "TCP", 00:13:27.736 "adrfam": "IPv4", 00:13:27.736 "traddr": "10.0.0.3", 00:13:27.736 "trsvcid": "4420" 00:13:27.736 }, 00:13:27.736 "peer_address": { 00:13:27.736 "trtype": "TCP", 00:13:27.736 "adrfam": "IPv4", 00:13:27.736 "traddr": "10.0.0.1", 00:13:27.736 "trsvcid": "35938" 00:13:27.736 }, 00:13:27.736 "auth": { 00:13:27.736 "state": "completed", 00:13:27.736 "digest": "sha256", 00:13:27.736 "dhgroup": "ffdhe3072" 00:13:27.736 } 00:13:27.736 } 00:13:27.736 ]' 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.736 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.994 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:27.994 01:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.930 01:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.497 00:13:29.497 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.497 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.497 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.755 { 00:13:29.755 "cntlid": 25, 00:13:29.755 "qid": 0, 00:13:29.755 "state": "enabled", 00:13:29.755 "thread": "nvmf_tgt_poll_group_000", 00:13:29.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:29.755 "listen_address": { 00:13:29.755 "trtype": "TCP", 00:13:29.755 "adrfam": "IPv4", 00:13:29.755 "traddr": "10.0.0.3", 00:13:29.755 "trsvcid": "4420" 00:13:29.755 }, 00:13:29.755 "peer_address": { 00:13:29.755 "trtype": "TCP", 00:13:29.755 "adrfam": "IPv4", 00:13:29.755 "traddr": "10.0.0.1", 00:13:29.755 "trsvcid": "35964" 00:13:29.755 }, 00:13:29.755 "auth": { 00:13:29.755 "state": "completed", 00:13:29.755 "digest": "sha256", 00:13:29.755 "dhgroup": "ffdhe4096" 00:13:29.755 } 00:13:29.755 } 00:13:29.755 ]' 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:29.755 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.756 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.756 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.756 01:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.329 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:30.329 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:30.935 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.935 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:30.935 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.935 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.935 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.935 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.935 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:30.935 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.194 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.195 01:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.453 00:13:31.453 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.453 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.453 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.712 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.712 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.712 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.712 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.712 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.712 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.712 { 00:13:31.712 "cntlid": 27, 00:13:31.712 "qid": 0, 00:13:31.712 "state": "enabled", 00:13:31.712 "thread": "nvmf_tgt_poll_group_000", 00:13:31.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:31.712 "listen_address": { 00:13:31.712 "trtype": "TCP", 00:13:31.712 "adrfam": "IPv4", 00:13:31.712 "traddr": "10.0.0.3", 00:13:31.712 "trsvcid": "4420" 00:13:31.712 }, 00:13:31.712 "peer_address": { 00:13:31.712 "trtype": "TCP", 00:13:31.712 "adrfam": "IPv4", 00:13:31.712 "traddr": "10.0.0.1", 00:13:31.712 "trsvcid": "35994" 00:13:31.712 }, 00:13:31.712 "auth": { 00:13:31.712 "state": "completed", 
00:13:31.712 "digest": "sha256", 00:13:31.712 "dhgroup": "ffdhe4096" 00:13:31.712 } 00:13:31.712 } 00:13:31.712 ]' 00:13:31.712 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.971 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:31.971 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.971 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:31.971 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.971 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.971 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.971 01:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.230 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:32.230 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:32.798 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.798 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:32.798 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.798 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.798 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.798 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.798 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:32.798 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.366 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:33.366 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.366 01:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:33.366 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:33.366 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:33.366 01:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.366 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.366 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.366 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.366 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.366 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.366 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.366 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.625 00:13:33.625 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.625 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.625 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.885 { 00:13:33.885 "cntlid": 29, 00:13:33.885 "qid": 0, 00:13:33.885 "state": "enabled", 00:13:33.885 "thread": "nvmf_tgt_poll_group_000", 00:13:33.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:33.885 "listen_address": { 00:13:33.885 "trtype": "TCP", 00:13:33.885 "adrfam": "IPv4", 00:13:33.885 "traddr": "10.0.0.3", 00:13:33.885 "trsvcid": "4420" 00:13:33.885 }, 00:13:33.885 "peer_address": { 00:13:33.885 "trtype": "TCP", 00:13:33.885 "adrfam": 
"IPv4", 00:13:33.885 "traddr": "10.0.0.1", 00:13:33.885 "trsvcid": "36008" 00:13:33.885 }, 00:13:33.885 "auth": { 00:13:33.885 "state": "completed", 00:13:33.885 "digest": "sha256", 00:13:33.885 "dhgroup": "ffdhe4096" 00:13:33.885 } 00:13:33.885 } 00:13:33.885 ]' 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.885 01:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.453 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:34.453 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:35.021 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.021 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:35.021 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.021 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.021 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.021 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.021 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:35.021 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:35.281 01:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:35.281 01:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.281 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.540 00:13:35.540 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.540 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.540 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.799 { 00:13:35.799 "cntlid": 31, 00:13:35.799 "qid": 0, 00:13:35.799 "state": "enabled", 00:13:35.799 "thread": "nvmf_tgt_poll_group_000", 00:13:35.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:35.799 "listen_address": { 00:13:35.799 "trtype": "TCP", 00:13:35.799 "adrfam": "IPv4", 00:13:35.799 "traddr": "10.0.0.3", 00:13:35.799 "trsvcid": "4420" 00:13:35.799 }, 00:13:35.799 "peer_address": { 00:13:35.799 "trtype": "TCP", 
00:13:35.799 "adrfam": "IPv4", 00:13:35.799 "traddr": "10.0.0.1", 00:13:35.799 "trsvcid": "37328" 00:13:35.799 }, 00:13:35.799 "auth": { 00:13:35.799 "state": "completed", 00:13:35.799 "digest": "sha256", 00:13:35.799 "dhgroup": "ffdhe4096" 00:13:35.799 } 00:13:35.799 } 00:13:35.799 ]' 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.799 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.059 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:36.059 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.059 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.059 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.059 01:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.318 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:36.318 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:36.886 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:37.145 
01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.145 01:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.713 00:13:37.713 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.713 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.713 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.972 { 00:13:37.972 "cntlid": 33, 00:13:37.972 "qid": 0, 00:13:37.972 "state": "enabled", 00:13:37.972 "thread": "nvmf_tgt_poll_group_000", 00:13:37.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:37.972 "listen_address": { 00:13:37.972 "trtype": "TCP", 00:13:37.972 "adrfam": "IPv4", 00:13:37.972 "traddr": 
"10.0.0.3", 00:13:37.972 "trsvcid": "4420" 00:13:37.972 }, 00:13:37.972 "peer_address": { 00:13:37.972 "trtype": "TCP", 00:13:37.972 "adrfam": "IPv4", 00:13:37.972 "traddr": "10.0.0.1", 00:13:37.972 "trsvcid": "37354" 00:13:37.972 }, 00:13:37.972 "auth": { 00:13:37.972 "state": "completed", 00:13:37.972 "digest": "sha256", 00:13:37.972 "dhgroup": "ffdhe6144" 00:13:37.972 } 00:13:37.972 } 00:13:37.972 ]' 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:37.972 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.231 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.231 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.231 01:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.491 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:38.491 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:39.061 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.061 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:39.061 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.061 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.061 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.061 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.061 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:39.061 01:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.325 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.584 00:13:39.843 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.843 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.843 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.104 { 00:13:40.104 "cntlid": 35, 00:13:40.104 "qid": 0, 00:13:40.104 "state": "enabled", 00:13:40.104 "thread": "nvmf_tgt_poll_group_000", 
00:13:40.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:40.104 "listen_address": { 00:13:40.104 "trtype": "TCP", 00:13:40.104 "adrfam": "IPv4", 00:13:40.104 "traddr": "10.0.0.3", 00:13:40.104 "trsvcid": "4420" 00:13:40.104 }, 00:13:40.104 "peer_address": { 00:13:40.104 "trtype": "TCP", 00:13:40.104 "adrfam": "IPv4", 00:13:40.104 "traddr": "10.0.0.1", 00:13:40.104 "trsvcid": "37378" 00:13:40.104 }, 00:13:40.104 "auth": { 00:13:40.104 "state": "completed", 00:13:40.104 "digest": "sha256", 00:13:40.104 "dhgroup": "ffdhe6144" 00:13:40.104 } 00:13:40.104 } 00:13:40.104 ]' 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:40.104 01:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.104 01:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.104 01:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.104 01:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.673 01:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:40.673 01:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:41.240 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.241 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:41.241 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.241 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.241 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.241 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.241 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:41.241 01:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.500 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.067 00:13:42.067 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.067 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.067 01:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.326 { 
00:13:42.326 "cntlid": 37, 00:13:42.326 "qid": 0, 00:13:42.326 "state": "enabled", 00:13:42.326 "thread": "nvmf_tgt_poll_group_000", 00:13:42.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:42.326 "listen_address": { 00:13:42.326 "trtype": "TCP", 00:13:42.326 "adrfam": "IPv4", 00:13:42.326 "traddr": "10.0.0.3", 00:13:42.326 "trsvcid": "4420" 00:13:42.326 }, 00:13:42.326 "peer_address": { 00:13:42.326 "trtype": "TCP", 00:13:42.326 "adrfam": "IPv4", 00:13:42.326 "traddr": "10.0.0.1", 00:13:42.326 "trsvcid": "37396" 00:13:42.326 }, 00:13:42.326 "auth": { 00:13:42.326 "state": "completed", 00:13:42.326 "digest": "sha256", 00:13:42.326 "dhgroup": "ffdhe6144" 00:13:42.326 } 00:13:42.326 } 00:13:42.326 ]' 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.326 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.895 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:42.895 01:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:43.463 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.463 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:43.463 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.463 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.463 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.463 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.463 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:43.463 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.722 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.982 00:13:43.982 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.982 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.982 01:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.549 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.549 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.549 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.549 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.549 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.549 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
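
The key3 rounds above differ from the key0-key2 rounds in one detail: nvmf_subsystem_add_host and bdev_nvme_attach_controller run with --dhchap-key key3 only, so the controller is not given a key to authenticate back with and the round exercises one-way host authentication. That follows from the ckey expansion visible in the trace; a small illustration of the same bash idiom, using keyid in place of the function's $3 and with the array contents assumed (the script defines them elsewhere):

    ckeys=( "ckey0" "ckey1" "ckey2" "" )   # assumed: no controller key is defined for index 3
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # prints 0: the :+ expansion yields nothing when ckeys[3] is empty,
                         # so the add_host/attach_controller calls carry --dhchap-key key3 only
    keyid=0
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"    # prints: --dhchap-ctrlr-key ckey0 (bidirectional round)
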
00:13:44.549 { 00:13:44.549 "cntlid": 39, 00:13:44.549 "qid": 0, 00:13:44.549 "state": "enabled", 00:13:44.549 "thread": "nvmf_tgt_poll_group_000", 00:13:44.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:44.549 "listen_address": { 00:13:44.549 "trtype": "TCP", 00:13:44.549 "adrfam": "IPv4", 00:13:44.549 "traddr": "10.0.0.3", 00:13:44.549 "trsvcid": "4420" 00:13:44.549 }, 00:13:44.549 "peer_address": { 00:13:44.549 "trtype": "TCP", 00:13:44.549 "adrfam": "IPv4", 00:13:44.549 "traddr": "10.0.0.1", 00:13:44.549 "trsvcid": "37412" 00:13:44.549 }, 00:13:44.549 "auth": { 00:13:44.549 "state": "completed", 00:13:44.549 "digest": "sha256", 00:13:44.549 "dhgroup": "ffdhe6144" 00:13:44.549 } 00:13:44.549 } 00:13:44.549 ]' 00:13:44.549 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.550 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.550 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.550 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:44.550 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.550 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.550 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.550 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.808 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:44.809 01:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.746 01:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.315 00:13:46.315 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.315 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.315 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.883 { 00:13:46.883 "cntlid": 41, 00:13:46.883 "qid": 0, 00:13:46.883 "state": "enabled", 00:13:46.883 "thread": "nvmf_tgt_poll_group_000", 00:13:46.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:46.883 "listen_address": { 00:13:46.883 "trtype": "TCP", 00:13:46.883 "adrfam": "IPv4", 00:13:46.883 "traddr": "10.0.0.3", 00:13:46.883 "trsvcid": "4420" 00:13:46.883 }, 00:13:46.883 "peer_address": { 00:13:46.883 "trtype": "TCP", 00:13:46.883 "adrfam": "IPv4", 00:13:46.883 "traddr": "10.0.0.1", 00:13:46.883 "trsvcid": "46394" 00:13:46.883 }, 00:13:46.883 "auth": { 00:13:46.883 "state": "completed", 00:13:46.883 "digest": "sha256", 00:13:46.883 "dhgroup": "ffdhe8192" 00:13:46.883 } 00:13:46.883 } 00:13:46.883 ]' 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.883 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.142 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:47.142 01:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
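
After each in-band SPDK attach has been verified and detached, the same subsystem is exercised once more from the kernel initiator with nvme-cli, passing the secrets directly in DHHC-1 form rather than by keyring name. A sketch of that step and its cleanup, with the secret values elided here (the full strings appear in the trace) and the target RPC socket assumed to be rpc_cmd's default:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Kernel initiator: connect with the host secret and, on bidirectional rounds,
    # the controller secret as well.
    nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 \
        --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."

    # Tear down: drop the kernel connection, then de-authorize the host on the target.
    nvme disconnect -n "$SUBNQN"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
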
00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.079 01:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.019 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.019 01:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.019 { 00:13:49.019 "cntlid": 43, 00:13:49.019 "qid": 0, 00:13:49.019 "state": "enabled", 00:13:49.019 "thread": "nvmf_tgt_poll_group_000", 00:13:49.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:49.019 "listen_address": { 00:13:49.019 "trtype": "TCP", 00:13:49.019 "adrfam": "IPv4", 00:13:49.019 "traddr": "10.0.0.3", 00:13:49.019 "trsvcid": "4420" 00:13:49.019 }, 00:13:49.019 "peer_address": { 00:13:49.019 "trtype": "TCP", 00:13:49.019 "adrfam": "IPv4", 00:13:49.019 "traddr": "10.0.0.1", 00:13:49.019 "trsvcid": "46402" 00:13:49.019 }, 00:13:49.019 "auth": { 00:13:49.019 "state": "completed", 00:13:49.019 "digest": "sha256", 00:13:49.019 "dhgroup": "ffdhe8192" 00:13:49.019 } 00:13:49.019 } 00:13:49.019 ]' 00:13:49.019 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.278 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.278 01:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.278 01:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.278 01:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.278 01:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.278 01:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.278 01:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.537 01:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:49.537 01:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:50.471 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.472 01:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.408 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.408 01:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.408 { 00:13:51.408 "cntlid": 45, 00:13:51.408 "qid": 0, 00:13:51.408 "state": "enabled", 00:13:51.408 "thread": "nvmf_tgt_poll_group_000", 00:13:51.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:51.408 "listen_address": { 00:13:51.408 "trtype": "TCP", 00:13:51.408 "adrfam": "IPv4", 00:13:51.408 "traddr": "10.0.0.3", 00:13:51.408 "trsvcid": "4420" 00:13:51.408 }, 00:13:51.408 "peer_address": { 00:13:51.408 "trtype": "TCP", 00:13:51.408 "adrfam": "IPv4", 00:13:51.408 "traddr": "10.0.0.1", 00:13:51.408 "trsvcid": "46430" 00:13:51.408 }, 00:13:51.408 "auth": { 00:13:51.408 "state": "completed", 00:13:51.408 "digest": "sha256", 00:13:51.408 "dhgroup": "ffdhe8192" 00:13:51.408 } 00:13:51.408 } 00:13:51.408 ]' 00:13:51.408 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.667 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.667 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.667 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:51.667 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.667 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.667 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.667 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.925 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:51.925 01:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:13:52.492 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.492 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:52.492 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:52.492 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.492 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.492 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.492 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:52.492 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.751 01:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.319 00:13:53.319 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.319 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.319 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.887 
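
The check that follows each successful attach pulls the subsystem's active qpairs from the target and asserts that the negotiated authentication parameters match what this round configured. A condensed sketch of that verification for the round in flight here (sha256, ffdhe8192), again assuming the target answers on rpc_cmd's default socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")

    # The qpair must report the digest and DH group selected for this round,
    # and an authentication state of "completed".
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
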
01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.887 { 00:13:53.887 "cntlid": 47, 00:13:53.887 "qid": 0, 00:13:53.887 "state": "enabled", 00:13:53.887 "thread": "nvmf_tgt_poll_group_000", 00:13:53.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:53.887 "listen_address": { 00:13:53.887 "trtype": "TCP", 00:13:53.887 "adrfam": "IPv4", 00:13:53.887 "traddr": "10.0.0.3", 00:13:53.887 "trsvcid": "4420" 00:13:53.887 }, 00:13:53.887 "peer_address": { 00:13:53.887 "trtype": "TCP", 00:13:53.887 "adrfam": "IPv4", 00:13:53.887 "traddr": "10.0.0.1", 00:13:53.887 "trsvcid": "46456" 00:13:53.887 }, 00:13:53.887 "auth": { 00:13:53.887 "state": "completed", 00:13:53.887 "digest": "sha256", 00:13:53.887 "dhgroup": "ffdhe8192" 00:13:53.887 } 00:13:53.887 } 00:13:53.887 ]' 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.887 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.146 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:54.146 01:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
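
The rounds that follow switch from sha256 to sha384 and begin with the null DH group. The for-lines in the trace show the driver is a nested loop over digests, DH groups and key indices; its assumed shape, using the script's own hostrpc and connect_authenticate helpers and with the array contents only partially visible in this excerpt, is roughly:

    # Assumed driver loop; digests/dhgroups/keys are defined earlier in auth.sh.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
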
00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.084 01:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.084 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.084 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.084 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.084 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.652 00:13:55.653 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.653 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.653 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.912 { 00:13:55.912 "cntlid": 49, 00:13:55.912 "qid": 0, 00:13:55.912 "state": "enabled", 00:13:55.912 "thread": "nvmf_tgt_poll_group_000", 00:13:55.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:55.912 "listen_address": { 00:13:55.912 "trtype": "TCP", 00:13:55.912 "adrfam": "IPv4", 00:13:55.912 "traddr": "10.0.0.3", 00:13:55.912 "trsvcid": "4420" 00:13:55.912 }, 00:13:55.912 "peer_address": { 00:13:55.912 "trtype": "TCP", 00:13:55.912 "adrfam": "IPv4", 00:13:55.912 "traddr": "10.0.0.1", 00:13:55.912 "trsvcid": "40258" 00:13:55.912 }, 00:13:55.912 "auth": { 00:13:55.912 "state": "completed", 00:13:55.912 "digest": "sha384", 00:13:55.912 "dhgroup": "null" 00:13:55.912 } 00:13:55.912 } 00:13:55.912 ]' 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.912 01:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.171 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:56.171 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:13:56.739 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.739 01:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:56.739 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.739 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.739 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.739 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.739 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:56.739 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.307 01:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.568 00:13:57.568 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.568 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
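The qpair dumps captured around target/auth.sh@74-77 above are checked field by field with jq against the digest and dhgroup of the current iteration. A minimal sketch of that verification, assuming the JSON layout printed in this log and the rpc_cmd wrapper used throughout it:

# Sketch of the qpair verification from target/auth.sh@74-77; expected values
# correspond to the digest/dhgroup selected for this iteration.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]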
00:13:57.568 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.827 { 00:13:57.827 "cntlid": 51, 00:13:57.827 "qid": 0, 00:13:57.827 "state": "enabled", 00:13:57.827 "thread": "nvmf_tgt_poll_group_000", 00:13:57.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:57.827 "listen_address": { 00:13:57.827 "trtype": "TCP", 00:13:57.827 "adrfam": "IPv4", 00:13:57.827 "traddr": "10.0.0.3", 00:13:57.827 "trsvcid": "4420" 00:13:57.827 }, 00:13:57.827 "peer_address": { 00:13:57.827 "trtype": "TCP", 00:13:57.827 "adrfam": "IPv4", 00:13:57.827 "traddr": "10.0.0.1", 00:13:57.827 "trsvcid": "40268" 00:13:57.827 }, 00:13:57.827 "auth": { 00:13:57.827 "state": "completed", 00:13:57.827 "digest": "sha384", 00:13:57.827 "dhgroup": "null" 00:13:57.827 } 00:13:57.827 } 00:13:57.827 ]' 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:57.827 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.086 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.086 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.086 01:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.345 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:58.345 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:13:58.913 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.913 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.913 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:13:58.913 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.913 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.913 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.913 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.913 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:58.913 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.172 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:59.172 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.172 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.172 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:59.172 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.172 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.173 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.173 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.173 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.173 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.173 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.173 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.173 01:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.432 00:13:59.432 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.432 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:13:59.432 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.691 { 00:13:59.691 "cntlid": 53, 00:13:59.691 "qid": 0, 00:13:59.691 "state": "enabled", 00:13:59.691 "thread": "nvmf_tgt_poll_group_000", 00:13:59.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:13:59.691 "listen_address": { 00:13:59.691 "trtype": "TCP", 00:13:59.691 "adrfam": "IPv4", 00:13:59.691 "traddr": "10.0.0.3", 00:13:59.691 "trsvcid": "4420" 00:13:59.691 }, 00:13:59.691 "peer_address": { 00:13:59.691 "trtype": "TCP", 00:13:59.691 "adrfam": "IPv4", 00:13:59.691 "traddr": "10.0.0.1", 00:13:59.691 "trsvcid": "40300" 00:13:59.691 }, 00:13:59.691 "auth": { 00:13:59.691 "state": "completed", 00:13:59.691 "digest": "sha384", 00:13:59.691 "dhgroup": "null" 00:13:59.691 } 00:13:59.691 } 00:13:59.691 ]' 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:59.691 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.950 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.950 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.950 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.209 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:00.209 01:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:00.777 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.777 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:00.777 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.777 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.777 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.777 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.778 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:00.778 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.037 01:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.296 00:14:01.296 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.296 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
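The ckey expansion logged at target/auth.sh@68 explains why the key3 iterations in this run pass only --dhchap-key: when no controller key is configured for a key index, the array expands to nothing and bidirectional authentication is skipped. A short sketch of that idiom, with the array contents inferred from this excerpt:

# Sketch of the ckey handling seen at target/auth.sh@68; the ckeys contents
# are illustrative, but in this run index 3 has no controller key.
ckeys=(ckey0 ckey1 ckey2 "")
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 \
    --dhchap-key "key$keyid" "${ckey[@]}"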
00:14:01.296 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.555 { 00:14:01.555 "cntlid": 55, 00:14:01.555 "qid": 0, 00:14:01.555 "state": "enabled", 00:14:01.555 "thread": "nvmf_tgt_poll_group_000", 00:14:01.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:01.555 "listen_address": { 00:14:01.555 "trtype": "TCP", 00:14:01.555 "adrfam": "IPv4", 00:14:01.555 "traddr": "10.0.0.3", 00:14:01.555 "trsvcid": "4420" 00:14:01.555 }, 00:14:01.555 "peer_address": { 00:14:01.555 "trtype": "TCP", 00:14:01.555 "adrfam": "IPv4", 00:14:01.555 "traddr": "10.0.0.1", 00:14:01.555 "trsvcid": "40326" 00:14:01.555 }, 00:14:01.555 "auth": { 00:14:01.555 "state": "completed", 00:14:01.555 "digest": "sha384", 00:14:01.555 "dhgroup": "null" 00:14:01.555 } 00:14:01.555 } 00:14:01.555 ]' 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:01.555 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.814 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.814 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.814 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.074 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:02.074 01:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
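The target/auth.sh@118-121 markers above show how these repeated blocks are generated: a nested loop over digests, dhgroups and key indices, reconfiguring the host options before each connect_authenticate call. A sketch of that loop skeleton, with the digest and dhgroup lists inferred from the combinations exercised in this excerpt:

# Loop skeleton matching the @118-121 markers in this log; hostrpc and
# connect_authenticate are the auth.sh helpers traced above.
digests=(sha256 sha384)                        # as seen in this excerpt
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe8192)  # as seen in this excerpt
keys=(key0 key1 key2 key3)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done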
00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:02.642 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.902 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.161 00:14:03.161 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.161 
01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.161 01:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.420 { 00:14:03.420 "cntlid": 57, 00:14:03.420 "qid": 0, 00:14:03.420 "state": "enabled", 00:14:03.420 "thread": "nvmf_tgt_poll_group_000", 00:14:03.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:03.420 "listen_address": { 00:14:03.420 "trtype": "TCP", 00:14:03.420 "adrfam": "IPv4", 00:14:03.420 "traddr": "10.0.0.3", 00:14:03.420 "trsvcid": "4420" 00:14:03.420 }, 00:14:03.420 "peer_address": { 00:14:03.420 "trtype": "TCP", 00:14:03.420 "adrfam": "IPv4", 00:14:03.420 "traddr": "10.0.0.1", 00:14:03.420 "trsvcid": "40346" 00:14:03.420 }, 00:14:03.420 "auth": { 00:14:03.420 "state": "completed", 00:14:03.420 "digest": "sha384", 00:14:03.420 "dhgroup": "ffdhe2048" 00:14:03.420 } 00:14:03.420 } 00:14:03.420 ]' 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:03.420 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.679 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.679 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.679 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.939 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:03.939 01:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: 
--dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:04.507 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.507 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:04.507 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.507 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.507 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.507 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.507 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:04.507 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.766 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.025 00:14:05.025 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.025 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.025 01:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.284 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.284 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.284 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.284 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.284 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.284 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.284 { 00:14:05.284 "cntlid": 59, 00:14:05.284 "qid": 0, 00:14:05.284 "state": "enabled", 00:14:05.284 "thread": "nvmf_tgt_poll_group_000", 00:14:05.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:05.284 "listen_address": { 00:14:05.284 "trtype": "TCP", 00:14:05.284 "adrfam": "IPv4", 00:14:05.284 "traddr": "10.0.0.3", 00:14:05.284 "trsvcid": "4420" 00:14:05.284 }, 00:14:05.284 "peer_address": { 00:14:05.284 "trtype": "TCP", 00:14:05.284 "adrfam": "IPv4", 00:14:05.284 "traddr": "10.0.0.1", 00:14:05.284 "trsvcid": "40378" 00:14:05.284 }, 00:14:05.284 "auth": { 00:14:05.284 "state": "completed", 00:14:05.284 "digest": "sha384", 00:14:05.284 "dhgroup": "ffdhe2048" 00:14:05.284 } 00:14:05.284 } 00:14:05.284 ]' 00:14:05.284 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.543 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:05.543 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.543 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:05.543 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.543 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.543 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.543 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.802 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:05.802 01:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:06.371 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.371 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:06.371 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.371 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.371 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.371 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.371 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.371 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.649 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:06.649 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.649 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:06.649 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:06.649 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:06.650 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.650 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.650 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.650 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.650 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.650 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.650 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.650 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.920 00:14:06.920 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.920 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.920 01:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.179 { 00:14:07.179 "cntlid": 61, 00:14:07.179 "qid": 0, 00:14:07.179 "state": "enabled", 00:14:07.179 "thread": "nvmf_tgt_poll_group_000", 00:14:07.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:07.179 "listen_address": { 00:14:07.179 "trtype": "TCP", 00:14:07.179 "adrfam": "IPv4", 00:14:07.179 "traddr": "10.0.0.3", 00:14:07.179 "trsvcid": "4420" 00:14:07.179 }, 00:14:07.179 "peer_address": { 00:14:07.179 "trtype": "TCP", 00:14:07.179 "adrfam": "IPv4", 00:14:07.179 "traddr": "10.0.0.1", 00:14:07.179 "trsvcid": "34946" 00:14:07.179 }, 00:14:07.179 "auth": { 00:14:07.179 "state": "completed", 00:14:07.179 "digest": "sha384", 00:14:07.179 "dhgroup": "ffdhe2048" 00:14:07.179 } 00:14:07.179 } 00:14:07.179 ]' 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:07.179 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.438 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.438 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.438 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.438 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.438 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.698 01:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:07.698 01:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:08.265 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.265 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:08.265 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.265 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.265 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.265 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.265 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.265 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.524 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.783 00:14:09.042 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.042 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.042 01:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.301 { 00:14:09.301 "cntlid": 63, 00:14:09.301 "qid": 0, 00:14:09.301 "state": "enabled", 00:14:09.301 "thread": "nvmf_tgt_poll_group_000", 00:14:09.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:09.301 "listen_address": { 00:14:09.301 "trtype": "TCP", 00:14:09.301 "adrfam": "IPv4", 00:14:09.301 "traddr": "10.0.0.3", 00:14:09.301 "trsvcid": "4420" 00:14:09.301 }, 00:14:09.301 "peer_address": { 00:14:09.301 "trtype": "TCP", 00:14:09.301 "adrfam": "IPv4", 00:14:09.301 "traddr": "10.0.0.1", 00:14:09.301 "trsvcid": "34964" 00:14:09.301 }, 00:14:09.301 "auth": { 00:14:09.301 "state": "completed", 00:14:09.301 "digest": "sha384", 00:14:09.301 "dhgroup": "ffdhe2048" 00:14:09.301 } 00:14:09.301 } 00:14:09.301 ]' 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.301 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.560 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:09.560 01:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:10.496 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.064 00:14:11.064 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.064 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.064 01:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.323 { 00:14:11.323 "cntlid": 65, 00:14:11.323 "qid": 0, 00:14:11.323 "state": "enabled", 00:14:11.323 "thread": "nvmf_tgt_poll_group_000", 00:14:11.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:11.323 "listen_address": { 00:14:11.323 "trtype": "TCP", 00:14:11.323 "adrfam": "IPv4", 00:14:11.323 "traddr": "10.0.0.3", 00:14:11.323 "trsvcid": "4420" 00:14:11.323 }, 00:14:11.323 "peer_address": { 00:14:11.323 "trtype": "TCP", 00:14:11.323 "adrfam": "IPv4", 00:14:11.323 "traddr": "10.0.0.1", 00:14:11.323 "trsvcid": "34978" 00:14:11.323 }, 00:14:11.323 "auth": { 00:14:11.323 "state": "completed", 00:14:11.323 "digest": "sha384", 00:14:11.323 "dhgroup": "ffdhe3072" 00:14:11.323 } 00:14:11.323 } 00:14:11.323 ]' 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.323 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.582 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:11.582 01:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:12.149 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.408 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:12.408 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.408 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.408 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.408 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.408 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:12.408 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.667 01:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.667 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.925 00:14:12.925 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.925 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.925 01:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.184 { 00:14:13.184 "cntlid": 67, 00:14:13.184 "qid": 0, 00:14:13.184 "state": "enabled", 00:14:13.184 "thread": "nvmf_tgt_poll_group_000", 00:14:13.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:13.184 "listen_address": { 00:14:13.184 "trtype": "TCP", 00:14:13.184 "adrfam": "IPv4", 00:14:13.184 "traddr": "10.0.0.3", 00:14:13.184 "trsvcid": "4420" 00:14:13.184 }, 00:14:13.184 "peer_address": { 00:14:13.184 "trtype": "TCP", 00:14:13.184 "adrfam": "IPv4", 00:14:13.184 "traddr": "10.0.0.1", 00:14:13.184 "trsvcid": "35000" 00:14:13.184 }, 00:14:13.184 "auth": { 00:14:13.184 "state": "completed", 00:14:13.184 "digest": "sha384", 00:14:13.184 "dhgroup": "ffdhe3072" 00:14:13.184 } 00:14:13.184 } 00:14:13.184 ]' 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.184 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.442 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:13.442 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.442 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.442 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.442 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.701 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:13.701 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:14.267 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.267 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:14.267 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.267 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.268 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.268 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.268 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:14.268 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.835 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.094 00:14:15.094 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.094 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.094 01:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.354 { 00:14:15.354 "cntlid": 69, 00:14:15.354 "qid": 0, 00:14:15.354 "state": "enabled", 00:14:15.354 "thread": "nvmf_tgt_poll_group_000", 00:14:15.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:15.354 "listen_address": { 00:14:15.354 "trtype": "TCP", 00:14:15.354 "adrfam": "IPv4", 00:14:15.354 "traddr": "10.0.0.3", 00:14:15.354 "trsvcid": "4420" 00:14:15.354 }, 00:14:15.354 "peer_address": { 00:14:15.354 "trtype": "TCP", 00:14:15.354 "adrfam": "IPv4", 00:14:15.354 "traddr": "10.0.0.1", 00:14:15.354 "trsvcid": "35034" 00:14:15.354 }, 00:14:15.354 "auth": { 00:14:15.354 "state": "completed", 00:14:15.354 "digest": "sha384", 00:14:15.354 "dhgroup": "ffdhe3072" 00:14:15.354 } 00:14:15.354 } 00:14:15.354 ]' 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.354 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.613 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.613 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:15.613 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.613 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:15.613 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:16.565 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.566 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:16.566 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.566 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.566 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.566 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.566 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:16.566 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.845 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:17.103 00:14:17.103 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.103 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.103 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.362 { 00:14:17.362 "cntlid": 71, 00:14:17.362 "qid": 0, 00:14:17.362 "state": "enabled", 00:14:17.362 "thread": "nvmf_tgt_poll_group_000", 00:14:17.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:17.362 "listen_address": { 00:14:17.362 "trtype": "TCP", 00:14:17.362 "adrfam": "IPv4", 00:14:17.362 "traddr": "10.0.0.3", 00:14:17.362 "trsvcid": "4420" 00:14:17.362 }, 00:14:17.362 "peer_address": { 00:14:17.362 "trtype": "TCP", 00:14:17.362 "adrfam": "IPv4", 00:14:17.362 "traddr": "10.0.0.1", 00:14:17.362 "trsvcid": "55938" 00:14:17.362 }, 00:14:17.362 "auth": { 00:14:17.362 "state": "completed", 00:14:17.362 "digest": "sha384", 00:14:17.362 "dhgroup": "ffdhe3072" 00:14:17.362 } 00:14:17.362 } 00:14:17.362 ]' 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.362 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.621 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.621 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.622 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.880 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:17.880 01:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:18.448 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.449 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:18.449 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.449 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.449 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.449 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.449 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.449 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:18.449 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.708 01:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.708 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.967 00:14:18.967 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.967 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.967 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.226 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.226 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.226 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.226 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.485 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.485 { 00:14:19.485 "cntlid": 73, 00:14:19.485 "qid": 0, 00:14:19.485 "state": "enabled", 00:14:19.485 "thread": "nvmf_tgt_poll_group_000", 00:14:19.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:19.485 "listen_address": { 00:14:19.485 "trtype": "TCP", 00:14:19.485 "adrfam": "IPv4", 00:14:19.485 "traddr": "10.0.0.3", 00:14:19.485 "trsvcid": "4420" 00:14:19.485 }, 00:14:19.485 "peer_address": { 00:14:19.485 "trtype": "TCP", 00:14:19.485 "adrfam": "IPv4", 00:14:19.485 "traddr": "10.0.0.1", 00:14:19.485 "trsvcid": "55970" 00:14:19.485 }, 00:14:19.485 "auth": { 00:14:19.485 "state": "completed", 00:14:19.485 "digest": "sha384", 00:14:19.485 "dhgroup": "ffdhe4096" 00:14:19.485 } 00:14:19.485 } 00:14:19.485 ]' 00:14:19.485 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.485 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.485 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.485 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:19.486 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.486 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.486 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.486 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.744 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:19.744 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:20.313 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.313 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:20.313 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.313 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.313 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.313 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.313 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.313 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.572 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:20.572 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.572 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:20.573 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:20.573 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:20.573 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.573 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.573 01:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.573 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.832 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.832 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.091 00:14:21.091 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.091 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.091 01:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.351 { 00:14:21.351 "cntlid": 75, 00:14:21.351 "qid": 0, 00:14:21.351 "state": "enabled", 00:14:21.351 "thread": "nvmf_tgt_poll_group_000", 00:14:21.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:21.351 "listen_address": { 00:14:21.351 "trtype": "TCP", 00:14:21.351 "adrfam": "IPv4", 00:14:21.351 "traddr": "10.0.0.3", 00:14:21.351 "trsvcid": "4420" 00:14:21.351 }, 00:14:21.351 "peer_address": { 00:14:21.351 "trtype": "TCP", 00:14:21.351 "adrfam": "IPv4", 00:14:21.351 "traddr": "10.0.0.1", 00:14:21.351 "trsvcid": "55990" 00:14:21.351 }, 00:14:21.351 "auth": { 00:14:21.351 "state": "completed", 00:14:21.351 "digest": "sha384", 00:14:21.351 "dhgroup": "ffdhe4096" 00:14:21.351 } 00:14:21.351 } 00:14:21.351 ]' 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:14:21.351 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.610 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.610 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.610 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.869 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:21.869 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:22.435 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.694 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:22.694 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.694 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.694 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.694 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.694 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.694 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.953 01:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.212 00:14:23.212 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.212 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.212 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.470 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.470 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.470 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.470 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.729 { 00:14:23.729 "cntlid": 77, 00:14:23.729 "qid": 0, 00:14:23.729 "state": "enabled", 00:14:23.729 "thread": "nvmf_tgt_poll_group_000", 00:14:23.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:23.729 "listen_address": { 00:14:23.729 "trtype": "TCP", 00:14:23.729 "adrfam": "IPv4", 00:14:23.729 "traddr": "10.0.0.3", 00:14:23.729 "trsvcid": "4420" 00:14:23.729 }, 00:14:23.729 "peer_address": { 00:14:23.729 "trtype": "TCP", 00:14:23.729 "adrfam": "IPv4", 00:14:23.729 "traddr": "10.0.0.1", 00:14:23.729 "trsvcid": "56020" 00:14:23.729 }, 00:14:23.729 "auth": { 00:14:23.729 "state": "completed", 00:14:23.729 "digest": "sha384", 00:14:23.729 "dhgroup": "ffdhe4096" 00:14:23.729 } 00:14:23.729 } 00:14:23.729 ]' 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.729 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.988 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:23.988 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:24.928 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.928 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:24.928 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.928 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.928 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.928 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.928 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.928 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.187 01:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.187 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.472 00:14:25.472 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.472 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.472 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.041 { 00:14:26.041 "cntlid": 79, 00:14:26.041 "qid": 0, 00:14:26.041 "state": "enabled", 00:14:26.041 "thread": "nvmf_tgt_poll_group_000", 00:14:26.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:26.041 "listen_address": { 00:14:26.041 "trtype": "TCP", 00:14:26.041 "adrfam": "IPv4", 00:14:26.041 "traddr": "10.0.0.3", 00:14:26.041 "trsvcid": "4420" 00:14:26.041 }, 00:14:26.041 "peer_address": { 00:14:26.041 "trtype": "TCP", 00:14:26.041 "adrfam": "IPv4", 00:14:26.041 "traddr": "10.0.0.1", 00:14:26.041 "trsvcid": "42924" 00:14:26.041 }, 00:14:26.041 "auth": { 00:14:26.041 "state": "completed", 00:14:26.041 "digest": "sha384", 00:14:26.041 "dhgroup": "ffdhe4096" 00:14:26.041 } 00:14:26.041 } 00:14:26.041 ]' 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.041 01:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.041 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.300 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:26.300 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:27.238 01:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.238 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.807 00:14:27.807 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.807 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.807 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.066 { 00:14:28.066 "cntlid": 81, 00:14:28.066 "qid": 0, 00:14:28.066 "state": "enabled", 00:14:28.066 "thread": "nvmf_tgt_poll_group_000", 00:14:28.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:28.066 "listen_address": { 00:14:28.066 "trtype": "TCP", 00:14:28.066 "adrfam": "IPv4", 00:14:28.066 "traddr": "10.0.0.3", 00:14:28.066 "trsvcid": "4420" 00:14:28.066 }, 00:14:28.066 "peer_address": { 00:14:28.066 "trtype": "TCP", 00:14:28.066 "adrfam": "IPv4", 00:14:28.066 "traddr": "10.0.0.1", 00:14:28.066 "trsvcid": "42950" 00:14:28.066 }, 00:14:28.066 "auth": { 00:14:28.066 "state": "completed", 00:14:28.066 "digest": "sha384", 00:14:28.066 "dhgroup": "ffdhe6144" 00:14:28.066 } 00:14:28.066 } 00:14:28.066 ]' 00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:28.066 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.326 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:28.326 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.326 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.326 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.326 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.586 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:28.586 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:29.155 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.155 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:29.155 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.155 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.155 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.155 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.155 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:29.155 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.414 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.981 00:14:29.981 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.981 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.981 01:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.238 { 00:14:30.238 "cntlid": 83, 00:14:30.238 "qid": 0, 00:14:30.238 "state": "enabled", 00:14:30.238 "thread": "nvmf_tgt_poll_group_000", 00:14:30.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:30.238 "listen_address": { 00:14:30.238 "trtype": "TCP", 00:14:30.238 "adrfam": "IPv4", 00:14:30.238 "traddr": "10.0.0.3", 00:14:30.238 "trsvcid": "4420" 00:14:30.238 }, 00:14:30.238 "peer_address": { 00:14:30.238 "trtype": "TCP", 00:14:30.238 "adrfam": "IPv4", 00:14:30.238 "traddr": "10.0.0.1", 00:14:30.238 "trsvcid": "42976" 00:14:30.238 }, 00:14:30.238 "auth": { 00:14:30.238 "state": "completed", 00:14:30.238 "digest": "sha384", 
00:14:30.238 "dhgroup": "ffdhe6144" 00:14:30.238 } 00:14:30.238 } 00:14:30.238 ]' 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.238 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.806 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:30.806 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:31.375 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.375 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:31.375 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.375 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.375 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.375 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.375 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:31.375 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.633 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.199 00:14:32.199 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.199 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.199 01:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.458 { 00:14:32.458 "cntlid": 85, 00:14:32.458 "qid": 0, 00:14:32.458 "state": "enabled", 00:14:32.458 "thread": "nvmf_tgt_poll_group_000", 00:14:32.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:32.458 "listen_address": { 00:14:32.458 "trtype": "TCP", 00:14:32.458 "adrfam": "IPv4", 00:14:32.458 "traddr": "10.0.0.3", 00:14:32.458 "trsvcid": "4420" 00:14:32.458 }, 00:14:32.458 "peer_address": { 00:14:32.458 "trtype": "TCP", 00:14:32.458 "adrfam": "IPv4", 00:14:32.458 "traddr": "10.0.0.1", 00:14:32.458 "trsvcid": "43008" 
00:14:32.458 }, 00:14:32.458 "auth": { 00:14:32.458 "state": "completed", 00:14:32.458 "digest": "sha384", 00:14:32.458 "dhgroup": "ffdhe6144" 00:14:32.458 } 00:14:32.458 } 00:14:32.458 ]' 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.458 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.717 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:32.717 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:33.653 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.653 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:33.653 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.653 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.653 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.653 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.653 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:33.653 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.912 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.482 00:14:34.482 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.482 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.482 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.748 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.748 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.748 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.748 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.748 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.748 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.748 { 00:14:34.748 "cntlid": 87, 00:14:34.748 "qid": 0, 00:14:34.748 "state": "enabled", 00:14:34.748 "thread": "nvmf_tgt_poll_group_000", 00:14:34.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:34.748 "listen_address": { 00:14:34.748 "trtype": "TCP", 00:14:34.748 "adrfam": "IPv4", 00:14:34.748 "traddr": "10.0.0.3", 00:14:34.748 "trsvcid": "4420" 00:14:34.748 }, 00:14:34.748 "peer_address": { 00:14:34.748 "trtype": "TCP", 00:14:34.748 "adrfam": "IPv4", 00:14:34.748 "traddr": "10.0.0.1", 00:14:34.748 "trsvcid": 
"43038" 00:14:34.748 }, 00:14:34.748 "auth": { 00:14:34.748 "state": "completed", 00:14:34.748 "digest": "sha384", 00:14:34.748 "dhgroup": "ffdhe6144" 00:14:34.748 } 00:14:34.748 } 00:14:34.748 ]' 00:14:34.749 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.749 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.749 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.749 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.749 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.749 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.749 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.749 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.007 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:35.007 01:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:35.575 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.575 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:35.575 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.576 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.576 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.576 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.576 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.576 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:35.576 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.144 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.712 00:14:36.712 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.712 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.712 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.972 { 00:14:36.972 "cntlid": 89, 00:14:36.972 "qid": 0, 00:14:36.972 "state": "enabled", 00:14:36.972 "thread": "nvmf_tgt_poll_group_000", 00:14:36.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:36.972 "listen_address": { 00:14:36.972 "trtype": "TCP", 00:14:36.972 "adrfam": "IPv4", 00:14:36.972 "traddr": "10.0.0.3", 00:14:36.972 "trsvcid": "4420" 00:14:36.972 }, 00:14:36.972 "peer_address": { 00:14:36.972 
"trtype": "TCP", 00:14:36.972 "adrfam": "IPv4", 00:14:36.972 "traddr": "10.0.0.1", 00:14:36.972 "trsvcid": "32862" 00:14:36.972 }, 00:14:36.972 "auth": { 00:14:36.972 "state": "completed", 00:14:36.972 "digest": "sha384", 00:14:36.972 "dhgroup": "ffdhe8192" 00:14:36.972 } 00:14:36.972 } 00:14:36.972 ]' 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:36.972 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.231 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.231 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.231 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.231 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:37.231 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:38.169 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.169 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:38.169 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.169 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.169 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.169 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.169 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:38.169 01:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:38.428 01:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.428 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.997 00:14:38.997 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.997 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.997 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.256 { 00:14:39.256 "cntlid": 91, 00:14:39.256 "qid": 0, 00:14:39.256 "state": "enabled", 00:14:39.256 "thread": "nvmf_tgt_poll_group_000", 00:14:39.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 
00:14:39.256 "listen_address": { 00:14:39.256 "trtype": "TCP", 00:14:39.256 "adrfam": "IPv4", 00:14:39.256 "traddr": "10.0.0.3", 00:14:39.256 "trsvcid": "4420" 00:14:39.256 }, 00:14:39.256 "peer_address": { 00:14:39.256 "trtype": "TCP", 00:14:39.256 "adrfam": "IPv4", 00:14:39.256 "traddr": "10.0.0.1", 00:14:39.256 "trsvcid": "32896" 00:14:39.256 }, 00:14:39.256 "auth": { 00:14:39.256 "state": "completed", 00:14:39.256 "digest": "sha384", 00:14:39.256 "dhgroup": "ffdhe8192" 00:14:39.256 } 00:14:39.256 } 00:14:39.256 ]' 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.256 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.515 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:39.515 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.515 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.515 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.515 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.774 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:39.774 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:40.343 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.343 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:40.343 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.343 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.343 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.343 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.343 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:40.343 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.602 01:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.539 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.539 { 00:14:41.539 "cntlid": 93, 00:14:41.539 "qid": 0, 00:14:41.539 "state": "enabled", 00:14:41.539 "thread": 
"nvmf_tgt_poll_group_000", 00:14:41.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:41.539 "listen_address": { 00:14:41.539 "trtype": "TCP", 00:14:41.539 "adrfam": "IPv4", 00:14:41.539 "traddr": "10.0.0.3", 00:14:41.539 "trsvcid": "4420" 00:14:41.539 }, 00:14:41.539 "peer_address": { 00:14:41.539 "trtype": "TCP", 00:14:41.539 "adrfam": "IPv4", 00:14:41.539 "traddr": "10.0.0.1", 00:14:41.539 "trsvcid": "32922" 00:14:41.539 }, 00:14:41.539 "auth": { 00:14:41.539 "state": "completed", 00:14:41.539 "digest": "sha384", 00:14:41.539 "dhgroup": "ffdhe8192" 00:14:41.539 } 00:14:41.539 } 00:14:41.539 ]' 00:14:41.539 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.798 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.798 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.798 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:41.798 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.799 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.799 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.799 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.057 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:42.057 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:42.624 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.625 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:42.625 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.625 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.625 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.625 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.625 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:42.625 01:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.193 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.762 00:14:43.762 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.762 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.762 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.021 { 00:14:44.021 "cntlid": 95, 00:14:44.021 "qid": 0, 00:14:44.021 "state": "enabled", 00:14:44.021 
"thread": "nvmf_tgt_poll_group_000", 00:14:44.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:44.021 "listen_address": { 00:14:44.021 "trtype": "TCP", 00:14:44.021 "adrfam": "IPv4", 00:14:44.021 "traddr": "10.0.0.3", 00:14:44.021 "trsvcid": "4420" 00:14:44.021 }, 00:14:44.021 "peer_address": { 00:14:44.021 "trtype": "TCP", 00:14:44.021 "adrfam": "IPv4", 00:14:44.021 "traddr": "10.0.0.1", 00:14:44.021 "trsvcid": "32952" 00:14:44.021 }, 00:14:44.021 "auth": { 00:14:44.021 "state": "completed", 00:14:44.021 "digest": "sha384", 00:14:44.021 "dhgroup": "ffdhe8192" 00:14:44.021 } 00:14:44.021 } 00:14:44.021 ]' 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.021 01:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.589 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:44.589 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.157 01:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:45.157 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.416 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.417 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.417 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.417 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.417 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.675 00:14:45.675 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.675 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.675 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.244 { 00:14:46.244 "cntlid": 97, 00:14:46.244 "qid": 0, 00:14:46.244 "state": "enabled", 00:14:46.244 "thread": "nvmf_tgt_poll_group_000", 00:14:46.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:46.244 "listen_address": { 00:14:46.244 "trtype": "TCP", 00:14:46.244 "adrfam": "IPv4", 00:14:46.244 "traddr": "10.0.0.3", 00:14:46.244 "trsvcid": "4420" 00:14:46.244 }, 00:14:46.244 "peer_address": { 00:14:46.244 "trtype": "TCP", 00:14:46.244 "adrfam": "IPv4", 00:14:46.244 "traddr": "10.0.0.1", 00:14:46.244 "trsvcid": "59946" 00:14:46.244 }, 00:14:46.244 "auth": { 00:14:46.244 "state": "completed", 00:14:46.244 "digest": "sha512", 00:14:46.244 "dhgroup": "null" 00:14:46.244 } 00:14:46.244 } 00:14:46.244 ]' 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:46.244 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.244 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.244 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.244 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.503 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:46.503 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:47.070 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.070 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:47.071 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.071 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.071 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:47.071 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.071 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:47.071 01:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.330 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.700 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.700 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.700 00:14:47.700 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.700 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.700 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.269 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.269 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.269 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.269 01:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.269 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.269 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.269 { 00:14:48.269 "cntlid": 99, 00:14:48.269 "qid": 0, 00:14:48.269 "state": "enabled", 00:14:48.269 "thread": "nvmf_tgt_poll_group_000", 00:14:48.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:48.269 "listen_address": { 00:14:48.269 "trtype": "TCP", 00:14:48.269 "adrfam": "IPv4", 00:14:48.269 "traddr": "10.0.0.3", 00:14:48.269 "trsvcid": "4420" 00:14:48.269 }, 00:14:48.269 "peer_address": { 00:14:48.269 "trtype": "TCP", 00:14:48.269 "adrfam": "IPv4", 00:14:48.269 "traddr": "10.0.0.1", 00:14:48.269 "trsvcid": "59970" 00:14:48.269 }, 00:14:48.269 "auth": { 00:14:48.269 "state": "completed", 00:14:48.269 "digest": "sha512", 00:14:48.269 "dhgroup": "null" 00:14:48.269 } 00:14:48.269 } 00:14:48.269 ]' 00:14:48.269 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.269 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.269 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.269 01:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:48.269 01:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.269 01:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.269 01:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.269 01:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.527 01:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:48.527 01:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:49.095 01:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.095 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:49.095 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.095 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.095 01:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.095 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.095 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:49.095 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.664 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.664 00:14:49.925 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.925 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.925 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.184 { 00:14:50.184 "cntlid": 101, 00:14:50.184 "qid": 0, 00:14:50.184 "state": "enabled", 00:14:50.184 "thread": "nvmf_tgt_poll_group_000", 00:14:50.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:50.184 "listen_address": { 00:14:50.184 "trtype": "TCP", 00:14:50.184 "adrfam": "IPv4", 00:14:50.184 "traddr": "10.0.0.3", 00:14:50.184 "trsvcid": "4420" 00:14:50.184 }, 00:14:50.184 "peer_address": { 00:14:50.184 "trtype": "TCP", 00:14:50.184 "adrfam": "IPv4", 00:14:50.184 "traddr": "10.0.0.1", 00:14:50.184 "trsvcid": "59990" 00:14:50.184 }, 00:14:50.184 "auth": { 00:14:50.184 "state": "completed", 00:14:50.184 "digest": "sha512", 00:14:50.184 "dhgroup": "null" 00:14:50.184 } 00:14:50.184 } 00:14:50.184 ]' 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.184 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.184 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:50.184 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.184 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.184 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.184 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.443 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:50.443 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:51.382 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.382 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:51.382 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.382 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:51.382 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.382 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.382 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:51.382 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.641 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.899 00:14:51.899 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.899 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.899 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.157 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.157 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.157 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:52.157 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.157 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.157 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.157 { 00:14:52.157 "cntlid": 103, 00:14:52.157 "qid": 0, 00:14:52.157 "state": "enabled", 00:14:52.157 "thread": "nvmf_tgt_poll_group_000", 00:14:52.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:52.157 "listen_address": { 00:14:52.157 "trtype": "TCP", 00:14:52.157 "adrfam": "IPv4", 00:14:52.157 "traddr": "10.0.0.3", 00:14:52.157 "trsvcid": "4420" 00:14:52.157 }, 00:14:52.157 "peer_address": { 00:14:52.157 "trtype": "TCP", 00:14:52.157 "adrfam": "IPv4", 00:14:52.157 "traddr": "10.0.0.1", 00:14:52.157 "trsvcid": "60006" 00:14:52.157 }, 00:14:52.157 "auth": { 00:14:52.157 "state": "completed", 00:14:52.157 "digest": "sha512", 00:14:52.157 "dhgroup": "null" 00:14:52.157 } 00:14:52.157 } 00:14:52.157 ]' 00:14:52.157 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.157 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.157 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.157 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.157 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.416 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.416 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.416 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.675 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:52.675 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:53.243 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.502 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.069 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.069 
01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.069 { 00:14:54.069 "cntlid": 105, 00:14:54.069 "qid": 0, 00:14:54.069 "state": "enabled", 00:14:54.069 "thread": "nvmf_tgt_poll_group_000", 00:14:54.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:54.069 "listen_address": { 00:14:54.069 "trtype": "TCP", 00:14:54.069 "adrfam": "IPv4", 00:14:54.069 "traddr": "10.0.0.3", 00:14:54.069 "trsvcid": "4420" 00:14:54.069 }, 00:14:54.069 "peer_address": { 00:14:54.069 "trtype": "TCP", 00:14:54.069 "adrfam": "IPv4", 00:14:54.069 "traddr": "10.0.0.1", 00:14:54.069 "trsvcid": "60030" 00:14:54.069 }, 00:14:54.069 "auth": { 00:14:54.069 "state": "completed", 00:14:54.069 "digest": "sha512", 00:14:54.069 "dhgroup": "ffdhe2048" 00:14:54.069 } 00:14:54.069 } 00:14:54.069 ]' 00:14:54.069 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.328 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.328 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.328 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:54.328 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.328 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.328 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.328 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.586 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:54.586 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:55.521 01:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.521 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.088 00:14:56.088 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.088 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.088 01:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.347 { 00:14:56.347 "cntlid": 107, 00:14:56.347 "qid": 0, 00:14:56.347 "state": "enabled", 00:14:56.347 "thread": "nvmf_tgt_poll_group_000", 00:14:56.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:56.347 "listen_address": { 00:14:56.347 "trtype": "TCP", 00:14:56.347 "adrfam": "IPv4", 00:14:56.347 "traddr": "10.0.0.3", 00:14:56.347 "trsvcid": "4420" 00:14:56.347 }, 00:14:56.347 "peer_address": { 00:14:56.347 "trtype": "TCP", 00:14:56.347 "adrfam": "IPv4", 00:14:56.347 "traddr": "10.0.0.1", 00:14:56.347 "trsvcid": "49986" 00:14:56.347 }, 00:14:56.347 "auth": { 00:14:56.347 "state": "completed", 00:14:56.347 "digest": "sha512", 00:14:56.347 "dhgroup": "ffdhe2048" 00:14:56.347 } 00:14:56.347 } 00:14:56.347 ]' 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.347 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.605 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.605 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.605 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.606 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:56.606 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:14:57.542 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.542 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:57.542 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.542 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.542 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.542 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.542 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:57.542 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:57.801 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:57.801 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.801 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:57.801 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:57.801 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.802 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.802 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.802 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.802 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.802 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.802 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.802 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.802 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.061 00:14:58.061 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.061 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.061 01:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.321 { 00:14:58.321 "cntlid": 109, 00:14:58.321 "qid": 0, 00:14:58.321 "state": "enabled", 00:14:58.321 "thread": "nvmf_tgt_poll_group_000", 00:14:58.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:14:58.321 "listen_address": { 00:14:58.321 "trtype": "TCP", 00:14:58.321 "adrfam": "IPv4", 00:14:58.321 "traddr": "10.0.0.3", 00:14:58.321 "trsvcid": "4420" 00:14:58.321 }, 00:14:58.321 "peer_address": { 00:14:58.321 "trtype": "TCP", 00:14:58.321 "adrfam": "IPv4", 00:14:58.321 "traddr": "10.0.0.1", 00:14:58.321 "trsvcid": "50018" 00:14:58.321 }, 00:14:58.321 "auth": { 00:14:58.321 "state": "completed", 00:14:58.321 "digest": "sha512", 00:14:58.321 "dhgroup": "ffdhe2048" 00:14:58.321 } 00:14:58.321 } 00:14:58.321 ]' 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.321 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.580 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.580 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.580 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.580 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.580 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.838 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:58.838 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.774 01:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.774 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.775 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.341 00:15:00.341 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.341 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.341 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.599 { 00:15:00.599 "cntlid": 111, 00:15:00.599 "qid": 0, 00:15:00.599 "state": "enabled", 00:15:00.599 "thread": "nvmf_tgt_poll_group_000", 00:15:00.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:00.599 "listen_address": { 00:15:00.599 "trtype": "TCP", 00:15:00.599 "adrfam": "IPv4", 00:15:00.599 "traddr": "10.0.0.3", 00:15:00.599 "trsvcid": "4420" 00:15:00.599 }, 00:15:00.599 "peer_address": { 00:15:00.599 "trtype": "TCP", 00:15:00.599 "adrfam": "IPv4", 00:15:00.599 "traddr": "10.0.0.1", 00:15:00.599 "trsvcid": "50036" 00:15:00.599 }, 00:15:00.599 "auth": { 00:15:00.599 "state": "completed", 00:15:00.599 "digest": "sha512", 00:15:00.599 "dhgroup": "ffdhe2048" 00:15:00.599 } 00:15:00.599 } 00:15:00.599 ]' 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.599 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.859 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:00.859 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:01.795 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.054 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.313 00:15:02.313 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.313 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.313 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.571 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.571 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.571 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.571 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.571 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.571 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.571 { 00:15:02.571 "cntlid": 113, 00:15:02.571 "qid": 0, 00:15:02.571 "state": "enabled", 00:15:02.571 "thread": "nvmf_tgt_poll_group_000", 00:15:02.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:02.571 "listen_address": { 00:15:02.571 "trtype": "TCP", 00:15:02.571 "adrfam": "IPv4", 00:15:02.571 "traddr": "10.0.0.3", 00:15:02.571 "trsvcid": "4420" 00:15:02.571 }, 00:15:02.571 "peer_address": { 00:15:02.571 "trtype": "TCP", 00:15:02.571 "adrfam": "IPv4", 00:15:02.571 "traddr": "10.0.0.1", 00:15:02.571 "trsvcid": "50054" 00:15:02.571 }, 00:15:02.571 "auth": { 00:15:02.571 "state": "completed", 00:15:02.571 "digest": "sha512", 00:15:02.571 "dhgroup": "ffdhe3072" 00:15:02.571 } 00:15:02.571 } 00:15:02.571 ]' 00:15:02.571 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.830 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.830 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.830 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.830 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.830 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.830 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.830 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.089 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:03.089 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret 
DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:03.656 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.925 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:03.925 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.925 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.925 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.925 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.925 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.925 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.196 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.455 00:15:04.455 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.455 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.455 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.713 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.713 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.713 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.713 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.713 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.713 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.713 { 00:15:04.713 "cntlid": 115, 00:15:04.713 "qid": 0, 00:15:04.713 "state": "enabled", 00:15:04.713 "thread": "nvmf_tgt_poll_group_000", 00:15:04.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:04.713 "listen_address": { 00:15:04.713 "trtype": "TCP", 00:15:04.713 "adrfam": "IPv4", 00:15:04.713 "traddr": "10.0.0.3", 00:15:04.713 "trsvcid": "4420" 00:15:04.713 }, 00:15:04.713 "peer_address": { 00:15:04.713 "trtype": "TCP", 00:15:04.713 "adrfam": "IPv4", 00:15:04.713 "traddr": "10.0.0.1", 00:15:04.713 "trsvcid": "50086" 00:15:04.713 }, 00:15:04.713 "auth": { 00:15:04.713 "state": "completed", 00:15:04.713 "digest": "sha512", 00:15:04.713 "dhgroup": "ffdhe3072" 00:15:04.713 } 00:15:04.714 } 00:15:04.714 ]' 00:15:04.714 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.714 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.714 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.972 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.972 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.972 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.972 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.972 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.230 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:15:05.230 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid 
f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:15:05.798 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.798 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:05.799 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.799 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.799 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.799 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.799 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:05.799 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.057 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.316 00:15:06.575 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.575 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.575 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.835 { 00:15:06.835 "cntlid": 117, 00:15:06.835 "qid": 0, 00:15:06.835 "state": "enabled", 00:15:06.835 "thread": "nvmf_tgt_poll_group_000", 00:15:06.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:06.835 "listen_address": { 00:15:06.835 "trtype": "TCP", 00:15:06.835 "adrfam": "IPv4", 00:15:06.835 "traddr": "10.0.0.3", 00:15:06.835 "trsvcid": "4420" 00:15:06.835 }, 00:15:06.835 "peer_address": { 00:15:06.835 "trtype": "TCP", 00:15:06.835 "adrfam": "IPv4", 00:15:06.835 "traddr": "10.0.0.1", 00:15:06.835 "trsvcid": "60114" 00:15:06.835 }, 00:15:06.835 "auth": { 00:15:06.835 "state": "completed", 00:15:06.835 "digest": "sha512", 00:15:06.835 "dhgroup": "ffdhe3072" 00:15:06.835 } 00:15:06.835 } 00:15:06.835 ]' 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.835 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.095 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:15:07.095 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.032 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.600 00:15:08.600 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.600 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.600 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.859 { 00:15:08.859 "cntlid": 119, 00:15:08.859 "qid": 0, 00:15:08.859 "state": "enabled", 00:15:08.859 "thread": "nvmf_tgt_poll_group_000", 00:15:08.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:08.859 "listen_address": { 00:15:08.859 "trtype": "TCP", 00:15:08.859 "adrfam": "IPv4", 00:15:08.859 "traddr": "10.0.0.3", 00:15:08.859 "trsvcid": "4420" 00:15:08.859 }, 00:15:08.859 "peer_address": { 00:15:08.859 "trtype": "TCP", 00:15:08.859 "adrfam": "IPv4", 00:15:08.859 "traddr": "10.0.0.1", 00:15:08.859 "trsvcid": "60142" 00:15:08.859 }, 00:15:08.859 "auth": { 00:15:08.859 "state": "completed", 00:15:08.859 "digest": "sha512", 00:15:08.859 "dhgroup": "ffdhe3072" 00:15:08.859 } 00:15:08.859 } 00:15:08.859 ]' 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.859 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.118 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:09.119 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.078 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.646 00:15:10.646 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.646 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.646 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.906 { 00:15:10.906 "cntlid": 121, 00:15:10.906 "qid": 0, 00:15:10.906 "state": "enabled", 00:15:10.906 "thread": "nvmf_tgt_poll_group_000", 00:15:10.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:10.906 "listen_address": { 00:15:10.906 "trtype": "TCP", 00:15:10.906 "adrfam": "IPv4", 00:15:10.906 "traddr": "10.0.0.3", 00:15:10.906 "trsvcid": "4420" 00:15:10.906 }, 00:15:10.906 "peer_address": { 00:15:10.906 "trtype": "TCP", 00:15:10.906 "adrfam": "IPv4", 00:15:10.906 "traddr": "10.0.0.1", 00:15:10.906 "trsvcid": "60158" 00:15:10.906 }, 00:15:10.906 "auth": { 00:15:10.906 "state": "completed", 00:15:10.906 "digest": "sha512", 00:15:10.906 "dhgroup": "ffdhe4096" 00:15:10.906 } 00:15:10.906 } 00:15:10.906 ]' 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.906 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.165 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.165 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.165 01:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.423 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret 
DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:11.423 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:11.991 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.991 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:11.991 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.991 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.991 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.991 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.991 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:11.991 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.251 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.509 00:15:12.510 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.510 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.510 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.105 { 00:15:13.105 "cntlid": 123, 00:15:13.105 "qid": 0, 00:15:13.105 "state": "enabled", 00:15:13.105 "thread": "nvmf_tgt_poll_group_000", 00:15:13.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:13.105 "listen_address": { 00:15:13.105 "trtype": "TCP", 00:15:13.105 "adrfam": "IPv4", 00:15:13.105 "traddr": "10.0.0.3", 00:15:13.105 "trsvcid": "4420" 00:15:13.105 }, 00:15:13.105 "peer_address": { 00:15:13.105 "trtype": "TCP", 00:15:13.105 "adrfam": "IPv4", 00:15:13.105 "traddr": "10.0.0.1", 00:15:13.105 "trsvcid": "60188" 00:15:13.105 }, 00:15:13.105 "auth": { 00:15:13.105 "state": "completed", 00:15:13.105 "digest": "sha512", 00:15:13.105 "dhgroup": "ffdhe4096" 00:15:13.105 } 00:15:13.105 } 00:15:13.105 ]' 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.105 01:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.364 01:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:15:13.364 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:15:14.301 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.301 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:14.301 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.301 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.301 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.301 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.301 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:14.301 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.559 01:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.559 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.126 00:15:15.126 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.126 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.126 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.385 { 00:15:15.385 "cntlid": 125, 00:15:15.385 "qid": 0, 00:15:15.385 "state": "enabled", 00:15:15.385 "thread": "nvmf_tgt_poll_group_000", 00:15:15.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:15.385 "listen_address": { 00:15:15.385 "trtype": "TCP", 00:15:15.385 "adrfam": "IPv4", 00:15:15.385 "traddr": "10.0.0.3", 00:15:15.385 "trsvcid": "4420" 00:15:15.385 }, 00:15:15.385 "peer_address": { 00:15:15.385 "trtype": "TCP", 00:15:15.385 "adrfam": "IPv4", 00:15:15.385 "traddr": "10.0.0.1", 00:15:15.385 "trsvcid": "60230" 00:15:15.385 }, 00:15:15.385 "auth": { 00:15:15.385 "state": "completed", 00:15:15.385 "digest": "sha512", 00:15:15.385 "dhgroup": "ffdhe4096" 00:15:15.385 } 00:15:15.385 } 00:15:15.385 ]' 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.385 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.953 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:15:15.953 01:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:15:16.521 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.521 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:16.521 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.521 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.521 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.521 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.521 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:16.521 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:16.779 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:16.780 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.347 00:15:17.348 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.348 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.348 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.607 { 00:15:17.607 "cntlid": 127, 00:15:17.607 "qid": 0, 00:15:17.607 "state": "enabled", 00:15:17.607 "thread": "nvmf_tgt_poll_group_000", 00:15:17.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:17.607 "listen_address": { 00:15:17.607 "trtype": "TCP", 00:15:17.607 "adrfam": "IPv4", 00:15:17.607 "traddr": "10.0.0.3", 00:15:17.607 "trsvcid": "4420" 00:15:17.607 }, 00:15:17.607 "peer_address": { 00:15:17.607 "trtype": "TCP", 00:15:17.607 "adrfam": "IPv4", 00:15:17.607 "traddr": "10.0.0.1", 00:15:17.607 "trsvcid": "41462" 00:15:17.607 }, 00:15:17.607 "auth": { 00:15:17.607 "state": "completed", 00:15:17.607 "digest": "sha512", 00:15:17.607 "dhgroup": "ffdhe4096" 00:15:17.607 } 00:15:17.607 } 00:15:17.607 ]' 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.607 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.866 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.866 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.866 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.866 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.866 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.123 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:18.123 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:18.691 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.260 01:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.260 01:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.519 00:15:19.519 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.519 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.519 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.778 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.778 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.778 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.778 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.778 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.778 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.778 { 00:15:19.778 "cntlid": 129, 00:15:19.778 "qid": 0, 00:15:19.778 "state": "enabled", 00:15:19.778 "thread": "nvmf_tgt_poll_group_000", 00:15:19.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:19.778 "listen_address": { 00:15:19.778 "trtype": "TCP", 00:15:19.778 "adrfam": "IPv4", 00:15:19.778 "traddr": "10.0.0.3", 00:15:19.778 "trsvcid": "4420" 00:15:19.778 }, 00:15:19.778 "peer_address": { 00:15:19.778 "trtype": "TCP", 00:15:19.778 "adrfam": "IPv4", 00:15:19.778 "traddr": "10.0.0.1", 00:15:19.778 "trsvcid": "41490" 00:15:19.778 }, 00:15:19.778 "auth": { 00:15:19.778 "state": "completed", 00:15:19.778 "digest": "sha512", 00:15:19.778 "dhgroup": "ffdhe6144" 00:15:19.778 } 00:15:19.778 } 00:15:19.778 ]' 00:15:19.778 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.037 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.037 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.037 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.037 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.037 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.037 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.037 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.296 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:20.296 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:21.231 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.231 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:21.231 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.231 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.231 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.231 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.231 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:21.231 01:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.490 01:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.490 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.058 00:15:22.059 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.059 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.059 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.318 { 00:15:22.318 "cntlid": 131, 00:15:22.318 "qid": 0, 00:15:22.318 "state": "enabled", 00:15:22.318 "thread": "nvmf_tgt_poll_group_000", 00:15:22.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:22.318 "listen_address": { 00:15:22.318 "trtype": "TCP", 00:15:22.318 "adrfam": "IPv4", 00:15:22.318 "traddr": "10.0.0.3", 00:15:22.318 "trsvcid": "4420" 00:15:22.318 }, 00:15:22.318 "peer_address": { 00:15:22.318 "trtype": "TCP", 00:15:22.318 "adrfam": "IPv4", 00:15:22.318 "traddr": "10.0.0.1", 00:15:22.318 "trsvcid": "41502" 00:15:22.318 }, 00:15:22.318 "auth": { 00:15:22.318 "state": "completed", 00:15:22.318 "digest": "sha512", 00:15:22.318 "dhgroup": "ffdhe6144" 00:15:22.318 } 00:15:22.318 } 00:15:22.318 ]' 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.318 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.887 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:15:22.887 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:15:23.455 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.455 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:23.455 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.455 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.455 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.455 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.455 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:23.455 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.714 01:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.714 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.282 00:15:24.282 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.282 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.282 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.851 { 00:15:24.851 "cntlid": 133, 00:15:24.851 "qid": 0, 00:15:24.851 "state": "enabled", 00:15:24.851 "thread": "nvmf_tgt_poll_group_000", 00:15:24.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:24.851 "listen_address": { 00:15:24.851 "trtype": "TCP", 00:15:24.851 "adrfam": "IPv4", 00:15:24.851 "traddr": "10.0.0.3", 00:15:24.851 "trsvcid": "4420" 00:15:24.851 }, 00:15:24.851 "peer_address": { 00:15:24.851 "trtype": "TCP", 00:15:24.851 "adrfam": "IPv4", 00:15:24.851 "traddr": "10.0.0.1", 00:15:24.851 "trsvcid": "41544" 00:15:24.851 }, 00:15:24.851 "auth": { 00:15:24.851 "state": "completed", 00:15:24.851 "digest": "sha512", 00:15:24.851 "dhgroup": "ffdhe6144" 00:15:24.851 } 00:15:24.851 } 00:15:24.851 ]' 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.851 01:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.110 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:15:25.110 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:15:26.048 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.048 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:26.048 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.048 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.048 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.048 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.048 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:26.048 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:26.314 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:26.314 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.314 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:26.314 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.314 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:26.314 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.315 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:15:26.315 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.315 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.315 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.315 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:26.315 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.315 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.882 00:15:26.882 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.882 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.882 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.140 { 00:15:27.140 "cntlid": 135, 00:15:27.140 "qid": 0, 00:15:27.140 "state": "enabled", 00:15:27.140 "thread": "nvmf_tgt_poll_group_000", 00:15:27.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:27.140 "listen_address": { 00:15:27.140 "trtype": "TCP", 00:15:27.140 "adrfam": "IPv4", 00:15:27.140 "traddr": "10.0.0.3", 00:15:27.140 "trsvcid": "4420" 00:15:27.140 }, 00:15:27.140 "peer_address": { 00:15:27.140 "trtype": "TCP", 00:15:27.140 "adrfam": "IPv4", 00:15:27.140 "traddr": "10.0.0.1", 00:15:27.140 "trsvcid": "58850" 00:15:27.140 }, 00:15:27.140 "auth": { 00:15:27.140 "state": "completed", 00:15:27.140 "digest": "sha512", 00:15:27.140 "dhgroup": "ffdhe6144" 00:15:27.140 } 00:15:27.140 } 00:15:27.140 ]' 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:27.140 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.140 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.140 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.140 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.140 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.140 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.707 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:27.707 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:28.643 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.644 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:28.644 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.644 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.644 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.644 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.644 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.644 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:28.644 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:28.902 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:28.902 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.902 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.903 01:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.471 00:15:29.471 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.471 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.471 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.729 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.729 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.729 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.729 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.730 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.730 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.730 { 00:15:29.730 "cntlid": 137, 00:15:29.730 "qid": 0, 00:15:29.730 "state": "enabled", 00:15:29.730 "thread": "nvmf_tgt_poll_group_000", 00:15:29.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:29.730 "listen_address": { 00:15:29.730 "trtype": "TCP", 00:15:29.730 "adrfam": "IPv4", 00:15:29.730 "traddr": "10.0.0.3", 00:15:29.730 "trsvcid": "4420" 00:15:29.730 }, 00:15:29.730 "peer_address": { 00:15:29.730 "trtype": "TCP", 00:15:29.730 "adrfam": "IPv4", 00:15:29.730 "traddr": "10.0.0.1", 00:15:29.730 "trsvcid": "58878" 00:15:29.730 }, 00:15:29.730 "auth": { 00:15:29.730 "state": "completed", 00:15:29.730 "digest": "sha512", 00:15:29.730 "dhgroup": "ffdhe8192" 00:15:29.730 } 00:15:29.730 } 00:15:29.730 ]' 00:15:29.730 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.730 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:29.730 01:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.988 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.988 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.988 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.988 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.988 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.247 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:30.247 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:31.184 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.184 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:31.184 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.184 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.184 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.184 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:31.184 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:31.184 01:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.184 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.121 00:15:32.121 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.121 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.121 01:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.121 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.121 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.121 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.121 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.121 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.121 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.121 { 00:15:32.121 "cntlid": 139, 00:15:32.121 "qid": 0, 00:15:32.121 "state": "enabled", 00:15:32.121 "thread": "nvmf_tgt_poll_group_000", 00:15:32.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:32.121 "listen_address": { 00:15:32.121 "trtype": "TCP", 00:15:32.121 "adrfam": "IPv4", 00:15:32.121 "traddr": "10.0.0.3", 00:15:32.121 "trsvcid": "4420" 00:15:32.121 }, 00:15:32.121 "peer_address": { 00:15:32.121 "trtype": "TCP", 00:15:32.121 "adrfam": "IPv4", 00:15:32.121 "traddr": "10.0.0.1", 00:15:32.121 "trsvcid": "58914" 00:15:32.121 }, 00:15:32.121 "auth": { 00:15:32.121 "state": "completed", 00:15:32.121 "digest": "sha512", 00:15:32.121 "dhgroup": "ffdhe8192" 00:15:32.121 } 00:15:32.121 } 00:15:32.121 ]' 00:15:32.121 01:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.380 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.380 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.380 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:32.380 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.380 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.380 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.380 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.639 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:15:32.639 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: --dhchap-ctrl-secret DHHC-1:02:NjIwODM5MjhhMjhmMDg3NTczY2E1M2U2YTY4MTkzNTUwNmZiNWY4MzM3NGNkMzY2hiw/DQ==: 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.576 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.512 00:15:34.512 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.512 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.512 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.771 { 00:15:34.771 "cntlid": 141, 00:15:34.771 "qid": 0, 00:15:34.771 "state": "enabled", 00:15:34.771 "thread": "nvmf_tgt_poll_group_000", 00:15:34.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:34.771 "listen_address": { 00:15:34.771 "trtype": "TCP", 00:15:34.771 "adrfam": "IPv4", 00:15:34.771 "traddr": "10.0.0.3", 00:15:34.771 "trsvcid": "4420" 00:15:34.771 }, 00:15:34.771 "peer_address": { 00:15:34.771 "trtype": "TCP", 00:15:34.771 "adrfam": "IPv4", 00:15:34.771 "traddr": "10.0.0.1", 00:15:34.771 "trsvcid": "58946" 00:15:34.771 }, 00:15:34.771 "auth": { 00:15:34.771 "state": "completed", 00:15:34.771 "digest": 
"sha512", 00:15:34.771 "dhgroup": "ffdhe8192" 00:15:34.771 } 00:15:34.771 } 00:15:34.771 ]' 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.771 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.030 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:15:35.030 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:01:MGRkYmM5OWM3MzQ1NjYxNDRmZDY1MDI1M2ExM2NjY2a7gMZp: 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.968 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.534 00:15:36.534 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.534 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.534 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.792 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.792 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.792 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.792 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.792 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.792 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.792 { 00:15:36.792 "cntlid": 143, 00:15:36.792 "qid": 0, 00:15:36.792 "state": "enabled", 00:15:36.792 "thread": "nvmf_tgt_poll_group_000", 00:15:36.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:36.792 "listen_address": { 00:15:36.792 "trtype": "TCP", 00:15:36.792 "adrfam": "IPv4", 00:15:36.792 "traddr": "10.0.0.3", 00:15:36.792 "trsvcid": "4420" 00:15:36.792 }, 00:15:36.792 "peer_address": { 00:15:36.792 "trtype": "TCP", 00:15:36.792 "adrfam": "IPv4", 00:15:36.792 "traddr": "10.0.0.1", 00:15:36.792 "trsvcid": "43374" 00:15:36.792 }, 00:15:36.792 "auth": { 00:15:36.792 "state": "completed", 00:15:36.792 
"digest": "sha512", 00:15:36.792 "dhgroup": "ffdhe8192" 00:15:36.792 } 00:15:36.792 } 00:15:36.792 ]' 00:15:37.051 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.051 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.051 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.051 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.051 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.051 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.051 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.051 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.311 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:37.311 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:38.248 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.248 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.816 00:15:39.074 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.074 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.074 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.333 { 00:15:39.333 "cntlid": 145, 00:15:39.333 "qid": 0, 00:15:39.333 "state": "enabled", 00:15:39.333 "thread": "nvmf_tgt_poll_group_000", 00:15:39.333 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:39.333 "listen_address": { 00:15:39.333 "trtype": "TCP", 00:15:39.333 "adrfam": "IPv4", 00:15:39.333 "traddr": "10.0.0.3", 00:15:39.333 "trsvcid": "4420" 00:15:39.333 }, 00:15:39.333 "peer_address": { 00:15:39.333 "trtype": "TCP", 00:15:39.333 "adrfam": "IPv4", 00:15:39.333 "traddr": "10.0.0.1", 00:15:39.333 "trsvcid": "43404" 00:15:39.333 }, 00:15:39.333 "auth": { 00:15:39.333 "state": "completed", 00:15:39.333 "digest": "sha512", 00:15:39.333 "dhgroup": "ffdhe8192" 00:15:39.333 } 00:15:39.333 } 00:15:39.333 ]' 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.333 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.592 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:39.592 01:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:00:NWVmY2VhYzZkZDE0MmY4OGIxYTJhY2IyYTAxMGRjNDZkNzdkZDM2NTk3MWM5MTY5RJ5nNg==: --dhchap-ctrl-secret DHHC-1:03:MWIxOTk4ZTA5ODlmMDNmZWQzNjc1MDU2ZGE0Njg2Yzk5M2M2ZWI1YjZjYzAwMTZiMGQ0NWFiZDFlYzgyNDg1ZIeT6F0=: 00:15:40.528 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.528 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:40.528 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.528 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.528 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 00:15:40.529 01:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:40.529 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:41.096 request: 00:15:41.097 { 00:15:41.097 "name": "nvme0", 00:15:41.097 "trtype": "tcp", 00:15:41.097 "traddr": "10.0.0.3", 00:15:41.097 "adrfam": "ipv4", 00:15:41.097 "trsvcid": "4420", 00:15:41.097 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:41.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:41.097 "prchk_reftag": false, 00:15:41.097 "prchk_guard": false, 00:15:41.097 "hdgst": false, 00:15:41.097 "ddgst": false, 00:15:41.097 "dhchap_key": "key2", 00:15:41.097 "allow_unrecognized_csi": false, 00:15:41.097 "method": "bdev_nvme_attach_controller", 00:15:41.097 "req_id": 1 00:15:41.097 } 00:15:41.097 Got JSON-RPC error response 00:15:41.097 response: 00:15:41.097 { 00:15:41.097 "code": -5, 00:15:41.097 "message": "Input/output error" 00:15:41.097 } 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:41.097 
01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:41.097 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:41.665 request: 00:15:41.666 { 00:15:41.666 "name": "nvme0", 00:15:41.666 "trtype": "tcp", 00:15:41.666 "traddr": "10.0.0.3", 00:15:41.666 "adrfam": "ipv4", 00:15:41.666 "trsvcid": "4420", 00:15:41.666 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:41.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:41.666 "prchk_reftag": false, 00:15:41.666 "prchk_guard": false, 00:15:41.666 "hdgst": false, 00:15:41.666 "ddgst": false, 00:15:41.666 "dhchap_key": "key1", 00:15:41.666 "dhchap_ctrlr_key": "ckey2", 00:15:41.666 "allow_unrecognized_csi": false, 00:15:41.666 "method": "bdev_nvme_attach_controller", 00:15:41.666 "req_id": 1 00:15:41.666 } 00:15:41.666 Got JSON-RPC error response 00:15:41.666 response: 00:15:41.666 { 
00:15:41.666 "code": -5, 00:15:41.666 "message": "Input/output error" 00:15:41.666 } 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.666 01:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.235 
request: 00:15:42.235 { 00:15:42.235 "name": "nvme0", 00:15:42.235 "trtype": "tcp", 00:15:42.235 "traddr": "10.0.0.3", 00:15:42.235 "adrfam": "ipv4", 00:15:42.235 "trsvcid": "4420", 00:15:42.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:42.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:42.235 "prchk_reftag": false, 00:15:42.235 "prchk_guard": false, 00:15:42.235 "hdgst": false, 00:15:42.235 "ddgst": false, 00:15:42.235 "dhchap_key": "key1", 00:15:42.235 "dhchap_ctrlr_key": "ckey1", 00:15:42.235 "allow_unrecognized_csi": false, 00:15:42.235 "method": "bdev_nvme_attach_controller", 00:15:42.235 "req_id": 1 00:15:42.235 } 00:15:42.235 Got JSON-RPC error response 00:15:42.235 response: 00:15:42.235 { 00:15:42.235 "code": -5, 00:15:42.235 "message": "Input/output error" 00:15:42.235 } 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 70019 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70019 ']' 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70019 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70019 00:15:42.235 killing process with pid 70019 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70019' 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70019 00:15:42.235 01:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70019 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:43.614 01:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=73087 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 73087 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 73087 ']' 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.614 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 73087 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 73087 ']' 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:44.552 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.812 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.812 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:44.812 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:44.812 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.812 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.071 null0 00:15:45.071 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.071 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:45.071 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GtR 00:15:45.071 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.071 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.fPj ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fPj 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dCe 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.kfP ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kfP 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:45.331 01:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xbH 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.t4Y ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t4Y 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xPJ 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:45.331 01:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.269 nvme0n1 00:15:46.269 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.269 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.269 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.527 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.527 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.527 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.527 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.527 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.527 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.527 { 00:15:46.527 "cntlid": 1, 00:15:46.527 "qid": 0, 00:15:46.527 "state": "enabled", 00:15:46.527 "thread": "nvmf_tgt_poll_group_000", 00:15:46.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:46.527 "listen_address": { 00:15:46.527 "trtype": "TCP", 00:15:46.527 "adrfam": "IPv4", 00:15:46.528 "traddr": "10.0.0.3", 00:15:46.528 "trsvcid": "4420" 00:15:46.528 }, 00:15:46.528 "peer_address": { 00:15:46.528 "trtype": "TCP", 00:15:46.528 "adrfam": "IPv4", 00:15:46.528 "traddr": "10.0.0.1", 00:15:46.528 "trsvcid": "45718" 00:15:46.528 }, 00:15:46.528 "auth": { 00:15:46.528 "state": "completed", 00:15:46.528 "digest": "sha512", 00:15:46.528 "dhgroup": "ffdhe8192" 00:15:46.528 } 00:15:46.528 } 00:15:46.528 ]' 00:15:46.528 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.528 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:46.528 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.787 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.787 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.787 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.787 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.787 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.047 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:47.047 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:47.614 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key3 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:47.873 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.133 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.392 request: 00:15:48.392 { 00:15:48.392 "name": "nvme0", 00:15:48.392 "trtype": "tcp", 00:15:48.392 "traddr": "10.0.0.3", 00:15:48.392 "adrfam": "ipv4", 00:15:48.392 "trsvcid": "4420", 00:15:48.392 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:48.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:48.392 "prchk_reftag": false, 00:15:48.392 "prchk_guard": false, 00:15:48.392 "hdgst": false, 00:15:48.392 "ddgst": false, 00:15:48.392 "dhchap_key": "key3", 00:15:48.392 "allow_unrecognized_csi": false, 00:15:48.392 "method": "bdev_nvme_attach_controller", 00:15:48.392 "req_id": 1 00:15:48.392 } 00:15:48.392 Got JSON-RPC error response 00:15:48.392 response: 00:15:48.392 { 00:15:48.392 "code": -5, 00:15:48.392 "message": "Input/output error" 00:15:48.392 } 00:15:48.393 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:48.393 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.393 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:48.393 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.393 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:48.393 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:48.393 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:48.393 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.652 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.912 request: 00:15:48.912 { 00:15:48.912 "name": "nvme0", 00:15:48.912 "trtype": "tcp", 00:15:48.912 "traddr": "10.0.0.3", 00:15:48.912 "adrfam": "ipv4", 00:15:48.912 "trsvcid": "4420", 00:15:48.912 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:48.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:48.912 "prchk_reftag": false, 00:15:48.912 "prchk_guard": false, 00:15:48.912 "hdgst": false, 00:15:48.912 "ddgst": false, 00:15:48.912 "dhchap_key": "key3", 00:15:48.912 "allow_unrecognized_csi": false, 00:15:48.912 "method": "bdev_nvme_attach_controller", 00:15:48.912 "req_id": 1 00:15:48.912 } 00:15:48.912 Got JSON-RPC error response 00:15:48.912 response: 00:15:48.912 { 00:15:48.912 "code": -5, 00:15:48.912 "message": "Input/output error" 00:15:48.912 } 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:48.912 01:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:49.172 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:49.786 request: 00:15:49.786 { 00:15:49.786 "name": "nvme0", 00:15:49.786 "trtype": "tcp", 00:15:49.786 "traddr": "10.0.0.3", 00:15:49.786 "adrfam": "ipv4", 00:15:49.786 "trsvcid": "4420", 00:15:49.786 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:49.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:49.786 "prchk_reftag": false, 00:15:49.786 "prchk_guard": false, 00:15:49.786 "hdgst": false, 00:15:49.786 "ddgst": false, 00:15:49.786 "dhchap_key": "key0", 00:15:49.786 "dhchap_ctrlr_key": "key1", 00:15:49.786 "allow_unrecognized_csi": false, 00:15:49.786 "method": "bdev_nvme_attach_controller", 00:15:49.786 "req_id": 1 00:15:49.786 } 00:15:49.786 Got JSON-RPC error response 00:15:49.786 response: 00:15:49.786 { 00:15:49.786 "code": -5, 00:15:49.786 "message": "Input/output error" 00:15:49.786 } 00:15:49.786 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:49.786 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.786 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.786 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:15:49.786 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:49.786 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:49.786 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:50.045 nvme0n1 00:15:50.045 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:50.045 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:50.045 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.303 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.303 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.303 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.562 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 00:15:50.562 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.562 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.562 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.562 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:50.562 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:50.562 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:51.937 nvme0n1 00:15:51.937 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:51.937 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:51.937 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.196 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.196 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:52.196 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.196 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.196 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.196 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:52.196 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:52.196 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.455 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.455 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:52.455 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid f8eaa80b-beb5-4887-8952-726ced1ba196 -l 0 --dhchap-secret DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: --dhchap-ctrl-secret DHHC-1:03:YmVmMDhiNTdmYzU4MzM5YzA4NGQxMmYyMzI3MzI1ZjhhZDgzMjkyMTVhYmQ4NDc4ODZiMTE1MzRjYWVhM2ZmZR0V+g4=: 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.392 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:53.651 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:54.218 request: 00:15:54.218 { 00:15:54.218 "name": "nvme0", 00:15:54.218 "trtype": "tcp", 00:15:54.218 "traddr": "10.0.0.3", 00:15:54.218 "adrfam": "ipv4", 00:15:54.218 "trsvcid": "4420", 00:15:54.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:54.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196", 00:15:54.218 "prchk_reftag": false, 00:15:54.218 "prchk_guard": false, 00:15:54.218 "hdgst": false, 00:15:54.218 "ddgst": false, 00:15:54.218 "dhchap_key": "key1", 00:15:54.218 "allow_unrecognized_csi": false, 00:15:54.218 "method": "bdev_nvme_attach_controller", 00:15:54.218 "req_id": 1 00:15:54.218 } 00:15:54.218 Got JSON-RPC error response 00:15:54.218 response: 00:15:54.218 { 00:15:54.218 "code": -5, 00:15:54.218 "message": "Input/output error" 00:15:54.218 } 00:15:54.218 01:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:54.218 01:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.218 01:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.218 01:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.218 01:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:54.218 01:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:54.218 01:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:55.594 nvme0n1 00:15:55.594 
01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:55.594 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.594 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:55.594 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.594 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.594 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.160 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:15:56.160 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.160 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.160 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.160 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:56.160 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:56.160 01:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:56.419 nvme0n1 00:15:56.419 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:56.419 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.419 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:56.676 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.676 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.676 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.934 01:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: '' 2s 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: ]] 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MWQ4MjAzZjNkNmQzYjRiZWRkYjA3NWM5Y2FmNGVkNTJyEsWy: 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:56.934 01:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: 2s 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:59.464 01:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: ]] 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZGUwMjRkMDliYzdjNzJhODIwM2RiYzBmZjI5ZDc3MDcxY2E0ZDA2OGY2ODExNDZjw2K3qg==: 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:59.464 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:01.366 01:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:02.301 nvme0n1 00:16:02.301 01:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:02.301 01:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.301 01:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.301 01:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.301 01:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:02.301 01:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:02.880 01:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:02.880 01:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.880 01:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:03.140 01:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.140 01:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:16:03.140 01:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.140 01:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.140 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.140 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:03.140 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:03.399 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:03.399 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:03.399 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:03.658 01:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:03.658 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:04.595 request: 00:16:04.595 { 00:16:04.595 "name": "nvme0", 00:16:04.595 "dhchap_key": "key1", 00:16:04.595 "dhchap_ctrlr_key": "key3", 00:16:04.595 "method": "bdev_nvme_set_keys", 00:16:04.595 "req_id": 1 00:16:04.595 } 00:16:04.595 Got JSON-RPC error response 00:16:04.595 response: 00:16:04.595 { 00:16:04.595 "code": -13, 00:16:04.595 "message": "Permission denied" 00:16:04.595 } 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:04.595 01:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:05.529 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:05.529 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:05.529 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.096 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:06.096 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:06.096 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.096 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.096 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.096 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:06.096 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:06.096 01:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:07.032 nvme0n1 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:07.032 01:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:07.600 request: 00:16:07.600 { 00:16:07.600 "name": "nvme0", 00:16:07.600 "dhchap_key": "key2", 00:16:07.600 "dhchap_ctrlr_key": "key0", 00:16:07.600 "method": "bdev_nvme_set_keys", 00:16:07.600 "req_id": 1 00:16:07.600 } 00:16:07.600 Got JSON-RPC error response 00:16:07.600 response: 00:16:07.600 { 00:16:07.600 "code": -13, 00:16:07.600 "message": "Permission denied" 00:16:07.600 } 00:16:07.600 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:07.600 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:07.600 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:07.600 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:07.600 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:07.600 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:07.600 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.167 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:08.167 01:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:09.105 01:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:09.105 01:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:09.105 01:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.364 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:09.364 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:09.364 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 70052 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70052 ']' 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70052 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70052 00:16:09.365 killing process with pid 70052 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:09.365 01:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70052' 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70052 00:16:09.365 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70052 00:16:11.270 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:11.270 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:11.270 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:11.270 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:11.270 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:11.271 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:11.271 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:11.271 rmmod nvme_tcp 00:16:11.530 rmmod nvme_fabrics 00:16:11.530 rmmod nvme_keyring 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 73087 ']' 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 73087 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 73087 ']' 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 73087 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73087 00:16:11.530 killing process with pid 73087 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73087' 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 73087 00:16:11.530 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 73087 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
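
The trace above is the tail of the DH-CHAP re-key test: the subsystem's keys are rotated on the target first with nvmf_subsystem_set_keys, the host-side controller is then moved to the matching pair with bdev_nvme_set_keys, and a deliberately mismatched pair is shown to fail with JSON-RPC error -13 (Permission denied). A minimal sketch of that RPC pair, with the socket path, NQNs and key names copied from this log (key0-key3 are keyring entries registered earlier in the test, outside this excerpt):

    # rotate the subsystem's keys on the target side first (target's default RPC socket)
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # then switch the host-side controller to the same pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # a pair the target does not hold (key1/key3 or key2/key0 above) is rejected with code -13

With the auth test finished, cleanup is now running: killprocess stops the host and target apps, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, and the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline that follows removes only the rules the test tagged with an SPDK_NVMF comment.
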
00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:12.466 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GtR /tmp/spdk.key-sha256.dCe /tmp/spdk.key-sha384.xbH /tmp/spdk.key-sha512.xPJ /tmp/spdk.key-sha512.fPj /tmp/spdk.key-sha384.kfP /tmp/spdk.key-sha256.t4Y '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:12.725 ************************************ 00:16:12.725 END TEST nvmf_auth_target 00:16:12.725 00:16:12.725 real 3m17.126s 00:16:12.725 user 7m48.285s 00:16:12.725 sys 0m28.216s 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:12.725 ************************************ 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:12.725 ************************************ 00:16:12.725 START TEST nvmf_bdevio_no_huge 00:16:12.725 ************************************ 00:16:12.725 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:12.985 * Looking for test storage... 00:16:12.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:12.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.985 --rc genhtml_branch_coverage=1 00:16:12.985 --rc genhtml_function_coverage=1 00:16:12.985 --rc genhtml_legend=1 00:16:12.985 --rc geninfo_all_blocks=1 00:16:12.985 --rc geninfo_unexecuted_blocks=1 00:16:12.985 00:16:12.985 ' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:12.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.985 --rc genhtml_branch_coverage=1 00:16:12.985 --rc genhtml_function_coverage=1 00:16:12.985 --rc genhtml_legend=1 00:16:12.985 --rc geninfo_all_blocks=1 00:16:12.985 --rc geninfo_unexecuted_blocks=1 00:16:12.985 00:16:12.985 ' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:12.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.985 --rc genhtml_branch_coverage=1 00:16:12.985 --rc genhtml_function_coverage=1 00:16:12.985 --rc genhtml_legend=1 00:16:12.985 --rc geninfo_all_blocks=1 00:16:12.985 --rc geninfo_unexecuted_blocks=1 00:16:12.985 00:16:12.985 ' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:12.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.985 --rc genhtml_branch_coverage=1 00:16:12.985 --rc genhtml_function_coverage=1 00:16:12.985 --rc genhtml_legend=1 00:16:12.985 --rc geninfo_all_blocks=1 00:16:12.985 --rc geninfo_unexecuted_blocks=1 00:16:12.985 00:16:12.985 ' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.985 
01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:12.985 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.985 
01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:12.985 Cannot find device "nvmf_init_br" 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:12.985 Cannot find device "nvmf_init_br2" 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:12.985 Cannot find device "nvmf_tgt_br" 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.985 Cannot find device "nvmf_tgt_br2" 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:12.985 Cannot find device "nvmf_init_br" 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:12.985 Cannot find device "nvmf_init_br2" 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:12.985 Cannot find device "nvmf_tgt_br" 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:16:12.985 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:13.243 Cannot find device "nvmf_tgt_br2" 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:13.243 Cannot find device "nvmf_br" 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:13.243 Cannot find device "nvmf_init_if" 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:13.243 Cannot find device "nvmf_init_if2" 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:16:13.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.243 01:29:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.243 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:13.243 01:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:13.244 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.244 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:13.244 00:16:13.244 --- 10.0.0.3 ping statistics --- 00:16:13.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.244 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:13.244 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:13.244 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:16:13.244 00:16:13.244 --- 10.0.0.4 ping statistics --- 00:16:13.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.244 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:13.244 00:16:13.244 --- 10.0.0.1 ping statistics --- 00:16:13.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.244 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:13.244 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:13.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:13.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:13.503 00:16:13.503 --- 10.0.0.2 ping statistics --- 00:16:13.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.503 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=73775 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 73775 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 73775 ']' 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:13.503 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:13.503 [2024-09-28 01:29:09.328432] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
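
With the veth/bridge topology verified by the four pings above, the target for this variant is started inside the nvmf_tgt_ns_spdk namespace without hugepages; exercising the DPDK legacy-memory path is the point of the nvmf_bdevio_no_huge test. The launch traced above reduces to (flags copied from the log):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    # --no-huge -s 1024  back the app with 1024 MB of ordinary (non-hugepage) memory
    # -m 0x78            core mask 0b01111000, reactors on cores 3-6
    # -e 0xFFFF          enable every tracepoint group
    # -i 0               shared-memory id, so spdk_trace/rpc tooling can find this instance

The EAL parameter dump that follows shows the same choices handed to DPDK (-c 0x78 -m 1024 --no-huge), and the reactor start-up notices confirm cores 3-6.
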
00:16:13.503 [2024-09-28 01:29:09.328658] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:13.763 [2024-09-28 01:29:09.536678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.022 [2024-09-28 01:29:09.797696] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.022 [2024-09-28 01:29:09.797781] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.022 [2024-09-28 01:29:09.797797] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.022 [2024-09-28 01:29:09.797809] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.022 [2024-09-28 01:29:09.797819] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.022 [2024-09-28 01:29:09.798008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:16:14.022 [2024-09-28 01:29:09.798413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:16:14.022 [2024-09-28 01:29:09.798521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.022 [2024-09-28 01:29:09.798540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:16:14.022 [2024-09-28 01:29:09.942993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:14.590 [2024-09-28 01:29:10.299072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.590 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:14.590 Malloc0 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.591 01:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:14.591 [2024-09-28 01:29:10.394973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:16:14.591 { 00:16:14.591 "params": { 00:16:14.591 "name": "Nvme$subsystem", 00:16:14.591 "trtype": "$TEST_TRANSPORT", 00:16:14.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.591 "adrfam": "ipv4", 00:16:14.591 "trsvcid": "$NVMF_PORT", 00:16:14.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.591 "hdgst": ${hdgst:-false}, 00:16:14.591 "ddgst": ${ddgst:-false} 00:16:14.591 }, 00:16:14.591 "method": "bdev_nvme_attach_controller" 00:16:14.591 } 00:16:14.591 EOF 00:16:14.591 )") 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
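
The heredoc being traced here is gen_nvmf_target_json assembling a bdev_nvme_attach_controller entry for the bdevio exerciser; the --json /dev/fd/62 on the bdevio command line is bash process substitution, so the generated config is handed over a pipe rather than a temp file. A sketch of the pattern (the wrapper common.sh places around the printed object is not part of this excerpt):

    # run the bdevio exerciser against the target, likewise without hugepages
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_nvmf_target_json) --no-huge -s 1024

The object printed next fills in the template: bdev Nvme1 attached over TCP to 10.0.0.3:4420, subsystem nqn.2016-06.io.spdk:cnode1, with header and data digests disabled.
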
00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:16:14.591 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:16:14.591 "params": { 00:16:14.591 "name": "Nvme1", 00:16:14.591 "trtype": "tcp", 00:16:14.591 "traddr": "10.0.0.3", 00:16:14.591 "adrfam": "ipv4", 00:16:14.591 "trsvcid": "4420", 00:16:14.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.591 "hdgst": false, 00:16:14.591 "ddgst": false 00:16:14.591 }, 00:16:14.591 "method": "bdev_nvme_attach_controller" 00:16:14.591 }' 00:16:14.591 [2024-09-28 01:29:10.514851] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:14.591 [2024-09-28 01:29:10.515023] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid73811 ] 00:16:14.867 [2024-09-28 01:29:10.717111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.146 [2024-09-28 01:29:10.955906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.146 [2024-09-28 01:29:10.956029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.146 [2024-09-28 01:29:10.956054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.404 [2024-09-28 01:29:11.115091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:15.662 I/O targets: 00:16:15.662 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:15.662 00:16:15.662 00:16:15.662 CUnit - A unit testing framework for C - Version 2.1-3 00:16:15.662 http://cunit.sourceforge.net/ 00:16:15.662 00:16:15.662 00:16:15.662 Suite: bdevio tests on: Nvme1n1 00:16:15.662 Test: blockdev write read block ...passed 00:16:15.662 Test: blockdev write zeroes read block ...passed 00:16:15.662 Test: blockdev write zeroes read no split ...passed 00:16:15.662 Test: blockdev write zeroes read split ...passed 00:16:15.662 Test: blockdev write zeroes read split partial ...passed 00:16:15.662 Test: blockdev reset ...[2024-09-28 01:29:11.457890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:15.662 [2024-09-28 01:29:11.458059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:16:15.662 [2024-09-28 01:29:11.472213] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
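
The sequence just logged is the "blockdev reset" test: bdevio issues a bdev-level reset, which for an NVMe bdev disconnects and reconnects the controller; the single *ERROR* about failing to flush the tqpair (Bad file descriptor) is expected noise from the socket being torn down mid-flush, and the reset then completes. A rough manual equivalent against an RPC-enabled host app such as the one used in the auth test above (a sketch only; bdevio has no RPC socket of its own, and this assumes the bdev_nvme_reset_controller RPC is available in this SPDK revision):

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_reset_controller nvme0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers -n nvme0   # confirm it reattached

In the compare/write tests further down, the COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09) notices are likewise intentional: the fused compare-and-write pairs are built to mismatch, so the compare half fails with Compare Failure and its fused write is aborted.
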
00:16:15.662 passed 00:16:15.662 Test: blockdev write read 8 blocks ...passed 00:16:15.662 Test: blockdev write read size > 128k ...passed 00:16:15.662 Test: blockdev write read invalid size ...passed 00:16:15.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:15.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:15.662 Test: blockdev write read max offset ...passed 00:16:15.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:15.662 Test: blockdev writev readv 8 blocks ...passed 00:16:15.662 Test: blockdev writev readv 30 x 1block ...passed 00:16:15.662 Test: blockdev writev readv block ...passed 00:16:15.662 Test: blockdev writev readv size > 128k ...passed 00:16:15.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:15.662 Test: blockdev comparev and writev ...[2024-09-28 01:29:11.485505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.662 [2024-09-28 01:29:11.485568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.485602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.662 [2024-09-28 01:29:11.485623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.486248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.662 [2024-09-28 01:29:11.486298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.486327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.662 [2024-09-28 01:29:11.486346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.486782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.662 [2024-09-28 01:29:11.486953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.487298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.662 [2024-09-28 01:29:11.487341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.487803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.662 [2024-09-28 01:29:11.487850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.487879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.662 [2024-09-28 01:29:11.487898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:15.662 passed 00:16:15.662 Test: blockdev nvme passthru rw ...passed 00:16:15.662 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:29:11.489332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.662 [2024-09-28 01:29:11.489575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.489824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.662 [2024-09-28 01:29:11.489934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:15.662 [2024-09-28 01:29:11.490288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.662 [2024-09-28 01:29:11.490335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:15.662 passed[2024-09-28 01:29:11.490502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.662 [2024-09-28 01:29:11.490551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:15.663 00:16:15.663 Test: blockdev nvme admin passthru ...passed 00:16:15.663 Test: blockdev copy ...passed 00:16:15.663 00:16:15.663 Run Summary: Type Total Ran Passed Failed Inactive 00:16:15.663 suites 1 1 n/a 0 0 00:16:15.663 tests 23 23 23 0 0 00:16:15.663 asserts 152 152 152 0 n/a 00:16:15.663 00:16:15.663 Elapsed time = 0.239 seconds 00:16:16.595 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.595 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.595 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:16.596 rmmod nvme_tcp 00:16:16.596 rmmod nvme_fabrics 00:16:16.596 rmmod nvme_keyring 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 73775 ']' 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 73775 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 73775 ']' 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 73775 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73775 00:16:16.596 killing process with pid 73775 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73775' 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 73775 00:16:16.596 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 73775 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:17.530 01:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:17.530 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:16:17.787 00:16:17.787 real 0m4.928s 00:16:17.787 user 0m16.497s 00:16:17.787 sys 0m1.589s 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:17.787 ************************************ 00:16:17.787 END TEST nvmf_bdevio_no_huge 00:16:17.787 ************************************ 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:17.787 01:29:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:17.788 01:29:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.788 ************************************ 00:16:17.788 START TEST nvmf_tls 00:16:17.788 ************************************ 00:16:17.788 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:17.788 * Looking for test storage... 
00:16:17.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:17.788 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:17.788 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:16:17.788 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:18.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.046 --rc genhtml_branch_coverage=1 00:16:18.046 --rc genhtml_function_coverage=1 00:16:18.046 --rc genhtml_legend=1 00:16:18.046 --rc geninfo_all_blocks=1 00:16:18.046 --rc geninfo_unexecuted_blocks=1 00:16:18.046 00:16:18.046 ' 00:16:18.046 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:18.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.046 --rc genhtml_branch_coverage=1 00:16:18.046 --rc genhtml_function_coverage=1 00:16:18.046 --rc genhtml_legend=1 00:16:18.046 --rc geninfo_all_blocks=1 00:16:18.046 --rc geninfo_unexecuted_blocks=1 00:16:18.046 00:16:18.046 ' 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:18.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.047 --rc genhtml_branch_coverage=1 00:16:18.047 --rc genhtml_function_coverage=1 00:16:18.047 --rc genhtml_legend=1 00:16:18.047 --rc geninfo_all_blocks=1 00:16:18.047 --rc geninfo_unexecuted_blocks=1 00:16:18.047 00:16:18.047 ' 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:18.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.047 --rc genhtml_branch_coverage=1 00:16:18.047 --rc genhtml_function_coverage=1 00:16:18.047 --rc genhtml_legend=1 00:16:18.047 --rc geninfo_all_blocks=1 00:16:18.047 --rc geninfo_unexecuted_blocks=1 00:16:18.047 00:16:18.047 ' 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.047 01:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:18.047 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:18.047 
01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:18.047 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:18.048 Cannot find device "nvmf_init_br" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:18.048 Cannot find device "nvmf_init_br2" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:18.048 Cannot find device "nvmf_tgt_br" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.048 Cannot find device "nvmf_tgt_br2" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:18.048 Cannot find device "nvmf_init_br" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:18.048 Cannot find device "nvmf_init_br2" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:18.048 Cannot find device "nvmf_tgt_br" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:18.048 Cannot find device "nvmf_tgt_br2" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:18.048 Cannot find device "nvmf_br" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:18.048 Cannot find device "nvmf_init_if" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:18.048 Cannot find device "nvmf_init_if2" 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:18.048 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:18.307 01:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:18.307 01:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:18.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:18.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:18.307 00:16:18.307 --- 10.0.0.3 ping statistics --- 00:16:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.307 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:18.307 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:18.307 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:16:18.307 00:16:18.307 --- 10.0.0.4 ping statistics --- 00:16:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.307 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:18.307 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:18.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:18.307 00:16:18.307 --- 10.0.0.1 ping statistics --- 00:16:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.308 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:18.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:18.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:18.308 00:16:18.308 --- 10.0.0.2 ping statistics --- 00:16:18.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.308 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=74087 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 74087 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74087 ']' 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:18.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:18.308 01:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.566 [2024-09-28 01:29:14.329880] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:18.566 [2024-09-28 01:29:14.330067] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.824 [2024-09-28 01:29:14.514406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.824 [2024-09-28 01:29:14.754889] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.824 [2024-09-28 01:29:14.754972] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.824 [2024-09-28 01:29:14.754999] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.824 [2024-09-28 01:29:14.755021] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.824 [2024-09-28 01:29:14.755037] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.824 [2024-09-28 01:29:14.755087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.759 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.759 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:19.759 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:19.759 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.759 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.759 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.759 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:19.759 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:20.018 true 00:16:20.018 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:20.018 01:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:20.276 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:20.276 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:20.276 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:20.534 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:20.534 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:20.792 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:20.792 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:20.792 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:21.051 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:21.051 01:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:21.309 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:21.309 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:21.309 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:21.309 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:21.568 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:21.568 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:21.568 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:21.827 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:21.827 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:22.085 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:22.085 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:22.085 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:22.343 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:22.343 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:16:22.602 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.sjHXYMivbW 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.PZvu5HHsDc 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sjHXYMivbW 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.PZvu5HHsDc 00:16:22.861 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:23.120 01:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:23.688 [2024-09-28 01:29:19.349062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.688 01:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.sjHXYMivbW 00:16:23.688 01:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sjHXYMivbW 00:16:23.688 01:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:23.946 [2024-09-28 01:29:19.727744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.946 01:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:24.205 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:24.463 [2024-09-28 01:29:20.264081] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:24.463 [2024-09-28 01:29:20.264473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.463 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:24.723 malloc0 00:16:24.723 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.982 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sjHXYMivbW 00:16:25.240 01:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:25.499 01:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sjHXYMivbW 00:16:37.729 Initializing NVMe Controllers 00:16:37.729 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:37.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:37.729 Initialization complete. Launching workers. 00:16:37.729 ======================================================== 00:16:37.729 Latency(us) 00:16:37.729 Device Information : IOPS MiB/s Average min max 00:16:37.729 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7347.77 28.70 8712.56 2424.81 11976.60 00:16:37.729 ======================================================== 00:16:37.729 Total : 7347.77 28.70 8712.56 2424.81 11976.60 00:16:37.729 00:16:37.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.729 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sjHXYMivbW 00:16:37.729 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:37.729 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:37.729 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:37.729 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sjHXYMivbW 00:16:37.729 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:37.729 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74338 00:16:37.729 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:37.730 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74338 /var/tmp/bdevperf.sock 00:16:37.730 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74338 ']' 00:16:37.730 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:37.730 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.730 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:37.730 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:37.730 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:37.730 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.730 [2024-09-28 01:29:31.828662] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:37.730 [2024-09-28 01:29:31.829065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74338 ] 00:16:37.730 [2024-09-28 01:29:31.995320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.730 [2024-09-28 01:29:32.159366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.730 [2024-09-28 01:29:32.318859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:37.730 01:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.730 01:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:37.730 01:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sjHXYMivbW 00:16:37.730 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:37.730 [2024-09-28 01:29:33.277002] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:37.730 TLSTESTn1 00:16:37.730 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:37.730 Running I/O for 10 seconds... 
00:16:47.960 2952.00 IOPS, 11.53 MiB/s 2994.00 IOPS, 11.70 MiB/s 2996.67 IOPS, 11.71 MiB/s 2997.75 IOPS, 11.71 MiB/s 3001.60 IOPS, 11.72 MiB/s 3018.83 IOPS, 11.79 MiB/s 3005.86 IOPS, 11.74 MiB/s 2991.00 IOPS, 11.68 MiB/s 2992.78 IOPS, 11.69 MiB/s 3003.80 IOPS, 11.73 MiB/s 00:16:47.960 Latency(us) 00:16:47.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.960 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:47.960 Verification LBA range: start 0x0 length 0x2000 00:16:47.960 TLSTESTn1 : 10.02 3009.90 11.76 0.00 0.00 42447.08 7804.74 38130.04 00:16:47.960 =================================================================================================================== 00:16:47.960 Total : 3009.90 11.76 0.00 0.00 42447.08 7804.74 38130.04 00:16:47.960 { 00:16:47.960 "results": [ 00:16:47.960 { 00:16:47.960 "job": "TLSTESTn1", 00:16:47.960 "core_mask": "0x4", 00:16:47.960 "workload": "verify", 00:16:47.960 "status": "finished", 00:16:47.960 "verify_range": { 00:16:47.960 "start": 0, 00:16:47.960 "length": 8192 00:16:47.960 }, 00:16:47.960 "queue_depth": 128, 00:16:47.960 "io_size": 4096, 00:16:47.960 "runtime": 10.022255, 00:16:47.960 "iops": 3009.90146429122, 00:16:47.960 "mibps": 11.757427594887577, 00:16:47.960 "io_failed": 0, 00:16:47.960 "io_timeout": 0, 00:16:47.960 "avg_latency_us": 42447.07959472736, 00:16:47.960 "min_latency_us": 7804.741818181818, 00:16:47.960 "max_latency_us": 38130.03636363636 00:16:47.960 } 00:16:47.960 ], 00:16:47.960 "core_count": 1 00:16:47.960 } 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74338 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74338 ']' 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74338 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74338 00:16:47.960 killing process with pid 74338 00:16:47.960 Received shutdown signal, test time was about 10.000000 seconds 00:16:47.960 00:16:47.960 Latency(us) 00:16:47.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.960 =================================================================================================================== 00:16:47.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74338' 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74338 00:16:47.960 01:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74338 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.PZvu5HHsDc 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PZvu5HHsDc 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PZvu5HHsDc 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PZvu5HHsDc 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74484 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74484 /var/tmp/bdevperf.sock 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74484 ']' 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:48.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.928 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.928 [2024-09-28 01:29:44.811870] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:48.928 [2024-09-28 01:29:44.813054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74484 ] 00:16:49.189 [2024-09-28 01:29:44.991794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.449 [2024-09-28 01:29:45.160415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.449 [2024-09-28 01:29:45.327898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:50.017 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.017 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:50.017 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PZvu5HHsDc 00:16:50.275 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:50.533 [2024-09-28 01:29:46.367543] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:50.533 [2024-09-28 01:29:46.376812] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:50.533 [2024-09-28 01:29:46.376818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:50.533 [2024-09-28 01:29:46.377898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:50.533 [2024-09-28 01:29:46.378908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:50.533 [2024-09-28 01:29:46.378950] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:50.533 [2024-09-28 01:29:46.378994] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:50.533 [2024-09-28 01:29:46.379012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:50.533 request: 00:16:50.533 { 00:16:50.533 "name": "TLSTEST", 00:16:50.533 "trtype": "tcp", 00:16:50.533 "traddr": "10.0.0.3", 00:16:50.533 "adrfam": "ipv4", 00:16:50.533 "trsvcid": "4420", 00:16:50.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.533 "prchk_reftag": false, 00:16:50.533 "prchk_guard": false, 00:16:50.533 "hdgst": false, 00:16:50.533 "ddgst": false, 00:16:50.533 "psk": "key0", 00:16:50.533 "allow_unrecognized_csi": false, 00:16:50.533 "method": "bdev_nvme_attach_controller", 00:16:50.533 "req_id": 1 00:16:50.533 } 00:16:50.533 Got JSON-RPC error response 00:16:50.533 response: 00:16:50.533 { 00:16:50.533 "code": -5, 00:16:50.533 "message": "Input/output error" 00:16:50.533 } 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74484 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74484 ']' 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74484 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74484 00:16:50.533 killing process with pid 74484 00:16:50.533 Received shutdown signal, test time was about 10.000000 seconds 00:16:50.533 00:16:50.533 Latency(us) 00:16:50.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.533 =================================================================================================================== 00:16:50.533 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74484' 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74484 00:16:50.533 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74484 00:16:51.912 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sjHXYMivbW 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sjHXYMivbW 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sjHXYMivbW 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sjHXYMivbW 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74520 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74520 /var/tmp/bdevperf.sock 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74520 ']' 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.913 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.913 [2024-09-28 01:29:47.631441] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:51.913 [2024-09-28 01:29:47.631924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74520 ] 00:16:51.913 [2024-09-28 01:29:47.804614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.172 [2024-09-28 01:29:47.980906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.431 [2024-09-28 01:29:48.148357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.690 01:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.690 01:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:52.690 01:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sjHXYMivbW 00:16:53.258 01:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:53.258 [2024-09-28 01:29:49.148026] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.258 [2024-09-28 01:29:49.157024] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:53.258 [2024-09-28 01:29:49.157073] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:53.258 [2024-09-28 01:29:49.157153] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:53.258 [2024-09-28 01:29:49.157322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:53.258 [2024-09-28 01:29:49.158301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:53.258 [2024-09-28 01:29:49.159297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:53.258 [2024-09-28 01:29:49.159692] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:53.258 [2024-09-28 01:29:49.159732] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:53.258 [2024-09-28 01:29:49.159753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:53.258 request: 00:16:53.258 { 00:16:53.258 "name": "TLSTEST", 00:16:53.258 "trtype": "tcp", 00:16:53.258 "traddr": "10.0.0.3", 00:16:53.258 "adrfam": "ipv4", 00:16:53.258 "trsvcid": "4420", 00:16:53.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.258 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:53.258 "prchk_reftag": false, 00:16:53.258 "prchk_guard": false, 00:16:53.258 "hdgst": false, 00:16:53.258 "ddgst": false, 00:16:53.258 "psk": "key0", 00:16:53.258 "allow_unrecognized_csi": false, 00:16:53.258 "method": "bdev_nvme_attach_controller", 00:16:53.258 "req_id": 1 00:16:53.258 } 00:16:53.258 Got JSON-RPC error response 00:16:53.258 response: 00:16:53.258 { 00:16:53.259 "code": -5, 00:16:53.259 "message": "Input/output error" 00:16:53.259 } 00:16:53.259 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74520 00:16:53.259 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74520 ']' 00:16:53.259 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74520 00:16:53.259 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:53.259 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.259 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74520 00:16:53.517 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:53.517 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:53.517 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74520' 00:16:53.517 killing process with pid 74520 00:16:53.517 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74520 00:16:53.517 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.517 00:16:53.517 Latency(us) 00:16:53.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.517 =================================================================================================================== 00:16:53.517 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:53.517 01:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74520 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sjHXYMivbW 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sjHXYMivbW 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sjHXYMivbW 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:54.453 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sjHXYMivbW 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74561 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74561 /var/tmp/bdevperf.sock 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74561 ']' 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.454 01:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.712 [2024-09-28 01:29:50.410405] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:54.712 [2024-09-28 01:29:50.410863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74561 ] 00:16:54.712 [2024-09-28 01:29:50.583514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.972 [2024-09-28 01:29:50.749183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.231 [2024-09-28 01:29:50.918955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:55.490 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.490 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:55.490 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sjHXYMivbW 00:16:55.749 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:56.008 [2024-09-28 01:29:51.854562] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:56.008 [2024-09-28 01:29:51.865670] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:56.008 [2024-09-28 01:29:51.865717] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:56.008 [2024-09-28 01:29:51.865778] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:56.008 [2024-09-28 01:29:51.866366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:56.008 [2024-09-28 01:29:51.867352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:56.008 [2024-09-28 01:29:51.868335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:56.008 [2024-09-28 01:29:51.868394] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:56.008 [2024-09-28 01:29:51.868444] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:56.008 [2024-09-28 01:29:51.868477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:56.008 request: 00:16:56.008 { 00:16:56.008 "name": "TLSTEST", 00:16:56.008 "trtype": "tcp", 00:16:56.008 "traddr": "10.0.0.3", 00:16:56.008 "adrfam": "ipv4", 00:16:56.008 "trsvcid": "4420", 00:16:56.008 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:56.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.008 "prchk_reftag": false, 00:16:56.008 "prchk_guard": false, 00:16:56.008 "hdgst": false, 00:16:56.008 "ddgst": false, 00:16:56.008 "psk": "key0", 00:16:56.008 "allow_unrecognized_csi": false, 00:16:56.008 "method": "bdev_nvme_attach_controller", 00:16:56.008 "req_id": 1 00:16:56.008 } 00:16:56.008 Got JSON-RPC error response 00:16:56.008 response: 00:16:56.008 { 00:16:56.008 "code": -5, 00:16:56.008 "message": "Input/output error" 00:16:56.008 } 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74561 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74561 ']' 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74561 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74561 00:16:56.008 killing process with pid 74561 00:16:56.008 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.008 00:16:56.008 Latency(us) 00:16:56.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.008 =================================================================================================================== 00:16:56.008 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74561' 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74561 00:16:56.008 01:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74561 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74607 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.386 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:57.387 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74607 /var/tmp/bdevperf.sock 00:16:57.387 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74607 ']' 00:16:57.387 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.387 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.387 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.387 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.387 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.387 [2024-09-28 01:29:53.222743] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:57.387 [2024-09-28 01:29:53.223154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74607 ] 00:16:57.646 [2024-09-28 01:29:53.394992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.905 [2024-09-28 01:29:53.633057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.163 [2024-09-28 01:29:53.839002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:58.422 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.422 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:58.422 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:58.681 [2024-09-28 01:29:54.458205] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:58.681 [2024-09-28 01:29:54.458699] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:58.681 request: 00:16:58.681 { 00:16:58.681 "name": "key0", 00:16:58.681 "path": "", 00:16:58.681 "method": "keyring_file_add_key", 00:16:58.681 "req_id": 1 00:16:58.681 } 00:16:58.681 Got JSON-RPC error response 00:16:58.681 response: 00:16:58.681 { 00:16:58.681 "code": -1, 00:16:58.681 "message": "Operation not permitted" 00:16:58.681 } 00:16:58.681 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:58.940 [2024-09-28 01:29:54.766414] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.940 [2024-09-28 01:29:54.766555] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:58.940 request: 00:16:58.940 { 00:16:58.940 "name": "TLSTEST", 00:16:58.940 "trtype": "tcp", 00:16:58.940 "traddr": "10.0.0.3", 00:16:58.940 "adrfam": "ipv4", 00:16:58.940 "trsvcid": "4420", 00:16:58.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.940 "prchk_reftag": false, 00:16:58.940 "prchk_guard": false, 00:16:58.940 "hdgst": false, 00:16:58.940 "ddgst": false, 00:16:58.940 "psk": "key0", 00:16:58.940 "allow_unrecognized_csi": false, 00:16:58.940 "method": "bdev_nvme_attach_controller", 00:16:58.940 "req_id": 1 00:16:58.940 } 00:16:58.940 Got JSON-RPC error response 00:16:58.940 response: 00:16:58.940 { 00:16:58.940 "code": -126, 00:16:58.940 "message": "Required key not available" 00:16:58.940 } 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74607 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74607 ']' 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74607 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.940 01:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74607 00:16:58.940 killing process with pid 74607 00:16:58.940 Received shutdown signal, test time was about 10.000000 seconds 00:16:58.940 00:16:58.940 Latency(us) 00:16:58.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.940 =================================================================================================================== 00:16:58.940 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74607' 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74607 00:16:58.940 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74607 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 74087 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74087 ']' 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74087 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74087 00:17:00.318 killing process with pid 74087 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74087' 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74087 00:17:00.318 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74087 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:17:01.268 01:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.BKxIJCZRj7 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.BKxIJCZRj7 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=74670 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 74670 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74670 ']' 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.268 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.545 [2024-09-28 01:29:57.301201] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:01.545 [2024-09-28 01:29:57.301655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.545 [2024-09-28 01:29:57.470261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.804 [2024-09-28 01:29:57.637932] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.804 [2024-09-28 01:29:57.637993] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
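The key_long value above (NVMeTLSkey-1:02:MDAx...wWXNJw==:) is produced by format_interchange_psk, which wraps the configured key string (here 48 ASCII characters) in the NVMe/TCP PSK interchange format before it is written to /tmp/tmp.BKxIJCZRj7 and chmod'ed. The body of the inline "python -" step is not captured in this trace; the sketch below is a hedged reconstruction that assumes the TP 8018 layout, i.e. the configured key bytes followed by a little-endian CRC32 of those bytes, base64-encoded, with a two-hex-digit hash identifier (02 = SHA-384 here) in the header.

# Hedged reconstruction of nvmf/common.sh format_interchange_psk/format_key.
# Assumption (not shown in this log): payload = key bytes + little-endian
# CRC32 of those bytes, base64-encoded, prefixed with "NVMeTLSkey-1:<digest>:".
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    crc = zlib.crc32(key.encode()).to_bytes(4, "little")  # 4-byte CRC32 trailer
    b64 = base64.b64encode(key.encode() + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

# Should reproduce the key_long above if the assumed CRC handling matches SPDK's helper:
print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))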
00:17:01.804 [2024-09-28 01:29:57.638028] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.804 [2024-09-28 01:29:57.638044] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.804 [2024-09-28 01:29:57.638056] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.804 [2024-09-28 01:29:57.638091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.063 [2024-09-28 01:29:57.795255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.321 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.321 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:02.321 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:02.321 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.321 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.322 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.322 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.BKxIJCZRj7 00:17:02.322 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BKxIJCZRj7 00:17:02.322 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:02.581 [2024-09-28 01:29:58.447966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.581 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:02.839 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:03.097 [2024-09-28 01:29:58.980121] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:03.097 [2024-09-28 01:29:58.980426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:03.098 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:03.356 malloc0 00:17:03.356 01:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:03.923 01:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:03.923 01:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BKxIJCZRj7 00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BKxIJCZRj7 00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74730 00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.181 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.182 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74730 /var/tmp/bdevperf.sock 00:17:04.182 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74730 ']' 00:17:04.182 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.182 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.182 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.182 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.182 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.440 [2024-09-28 01:30:00.126625] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:04.440 [2024-09-28 01:30:00.127008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74730 ] 00:17:04.440 [2024-09-28 01:30:00.290618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.699 [2024-09-28 01:30:00.503125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.958 [2024-09-28 01:30:00.673434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:05.525 01:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.525 01:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:05.525 01:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:05.784 01:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:06.042 [2024-09-28 01:30:01.717719] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.042 TLSTESTn1 00:17:06.042 01:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:06.042 Running I/O for 10 seconds... 00:17:16.330 2885.00 IOPS, 11.27 MiB/s 2880.00 IOPS, 11.25 MiB/s 2944.00 IOPS, 11.50 MiB/s 2978.75 IOPS, 11.64 MiB/s 2969.60 IOPS, 11.60 MiB/s 2952.00 IOPS, 11.53 MiB/s 2964.57 IOPS, 11.58 MiB/s 2959.12 IOPS, 11.56 MiB/s 2944.00 IOPS, 11.50 MiB/s 2931.20 IOPS, 11.45 MiB/s 00:17:16.330 Latency(us) 00:17:16.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.330 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:16.331 Verification LBA range: start 0x0 length 0x2000 00:17:16.331 TLSTESTn1 : 10.03 2934.96 11.46 0.00 0.00 43517.97 7983.48 28835.84 00:17:16.331 =================================================================================================================== 00:17:16.331 Total : 2934.96 11.46 0.00 0.00 43517.97 7983.48 28835.84 00:17:16.331 { 00:17:16.331 "results": [ 00:17:16.331 { 00:17:16.331 "job": "TLSTESTn1", 00:17:16.331 "core_mask": "0x4", 00:17:16.331 "workload": "verify", 00:17:16.331 "status": "finished", 00:17:16.331 "verify_range": { 00:17:16.331 "start": 0, 00:17:16.331 "length": 8192 00:17:16.331 }, 00:17:16.331 "queue_depth": 128, 00:17:16.331 "io_size": 4096, 00:17:16.331 "runtime": 10.030785, 00:17:16.331 "iops": 2934.9647111367653, 00:17:16.331 "mibps": 11.46470590287799, 00:17:16.331 "io_failed": 0, 00:17:16.331 "io_timeout": 0, 00:17:16.331 "avg_latency_us": 43517.965154150195, 00:17:16.331 "min_latency_us": 7983.476363636363, 00:17:16.331 "max_latency_us": 28835.84 00:17:16.331 } 00:17:16.331 ], 00:17:16.331 "core_count": 1 00:17:16.331 } 00:17:16.331 01:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.331 01:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 
74730 00:17:16.331 01:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74730 ']' 00:17:16.331 01:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74730 00:17:16.331 01:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:16.331 01:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.331 01:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74730 00:17:16.331 killing process with pid 74730 00:17:16.331 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.331 00:17:16.331 Latency(us) 00:17:16.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.331 =================================================================================================================== 00:17:16.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.331 01:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:16.331 01:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:16.331 01:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74730' 00:17:16.331 01:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74730 00:17:16.331 01:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74730 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.BKxIJCZRj7 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BKxIJCZRj7 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BKxIJCZRj7 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BKxIJCZRj7 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BKxIJCZRj7 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74873 00:17:17.267 01:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74873 /var/tmp/bdevperf.sock 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74873 ']' 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.267 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.526 [2024-09-28 01:30:13.298967] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:17.526 [2024-09-28 01:30:13.299139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74873 ] 00:17:17.785 [2024-09-28 01:30:13.476557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.785 [2024-09-28 01:30:13.651572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.045 [2024-09-28 01:30:13.828335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:18.304 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.304 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:18.304 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:18.872 [2024-09-28 01:30:14.511622] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BKxIJCZRj7': 0100666 00:17:18.872 [2024-09-28 01:30:14.511682] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:18.872 request: 00:17:18.872 { 00:17:18.872 "name": "key0", 00:17:18.872 "path": "/tmp/tmp.BKxIJCZRj7", 00:17:18.872 "method": "keyring_file_add_key", 00:17:18.872 "req_id": 1 00:17:18.872 } 00:17:18.872 Got JSON-RPC error response 00:17:18.872 response: 00:17:18.872 { 00:17:18.872 "code": -1, 00:17:18.872 "message": "Operation not permitted" 00:17:18.872 } 00:17:18.872 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:19.131 [2024-09-28 01:30:14.823994] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS 
support is considered experimental 00:17:19.131 [2024-09-28 01:30:14.824589] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:19.131 request: 00:17:19.131 { 00:17:19.131 "name": "TLSTEST", 00:17:19.131 "trtype": "tcp", 00:17:19.131 "traddr": "10.0.0.3", 00:17:19.131 "adrfam": "ipv4", 00:17:19.131 "trsvcid": "4420", 00:17:19.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.131 "prchk_reftag": false, 00:17:19.131 "prchk_guard": false, 00:17:19.131 "hdgst": false, 00:17:19.131 "ddgst": false, 00:17:19.131 "psk": "key0", 00:17:19.131 "allow_unrecognized_csi": false, 00:17:19.131 "method": "bdev_nvme_attach_controller", 00:17:19.131 "req_id": 1 00:17:19.131 } 00:17:19.131 Got JSON-RPC error response 00:17:19.131 response: 00:17:19.131 { 00:17:19.131 "code": -126, 00:17:19.131 "message": "Required key not available" 00:17:19.131 } 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74873 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74873 ']' 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74873 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74873 00:17:19.131 killing process with pid 74873 00:17:19.131 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.131 00:17:19.131 Latency(us) 00:17:19.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.131 =================================================================================================================== 00:17:19.131 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74873' 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74873 00:17:19.131 01:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74873 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 74670 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74670 ']' 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74670 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.068 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74670 00:17:20.326 killing process with pid 74670 00:17:20.326 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:20.326 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:20.326 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74670' 00:17:20.326 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74670 00:17:20.326 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74670 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=74931 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 74931 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74931 ']' 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:21.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.704 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.704 [2024-09-28 01:30:17.310702] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:21.704 [2024-09-28 01:30:17.310882] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.704 [2024-09-28 01:30:17.476719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.963 [2024-09-28 01:30:17.649619] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.963 [2024-09-28 01:30:17.649923] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:21.963 [2024-09-28 01:30:17.649957] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.963 [2024-09-28 01:30:17.649975] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.963 [2024-09-28 01:30:17.649987] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.963 [2024-09-28 01:30:17.650025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.963 [2024-09-28 01:30:17.826111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.BKxIJCZRj7 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BKxIJCZRj7 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.BKxIJCZRj7 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BKxIJCZRj7 00:17:22.530 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:22.790 [2024-09-28 01:30:18.595065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.790 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.049 01:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:23.308 [2024-09-28 01:30:19.203405] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.308 [2024-09-28 01:30:19.203859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:23.309 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:17:23.876 malloc0 00:17:23.876 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.134 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:24.393 [2024-09-28 01:30:20.089250] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BKxIJCZRj7': 0100666 00:17:24.393 [2024-09-28 01:30:20.089342] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:24.393 request: 00:17:24.393 { 00:17:24.393 "name": "key0", 00:17:24.393 "path": "/tmp/tmp.BKxIJCZRj7", 00:17:24.393 "method": "keyring_file_add_key", 00:17:24.393 "req_id": 1 00:17:24.393 } 00:17:24.393 Got JSON-RPC error response 00:17:24.393 response: 00:17:24.393 { 00:17:24.393 "code": -1, 00:17:24.393 "message": "Operation not permitted" 00:17:24.393 } 00:17:24.393 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:24.652 [2024-09-28 01:30:20.341313] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:24.652 [2024-09-28 01:30:20.341406] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:24.652 request: 00:17:24.652 { 00:17:24.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.652 "host": "nqn.2016-06.io.spdk:host1", 00:17:24.652 "psk": "key0", 00:17:24.652 "method": "nvmf_subsystem_add_host", 00:17:24.652 "req_id": 1 00:17:24.652 } 00:17:24.652 Got JSON-RPC error response 00:17:24.652 response: 00:17:24.652 { 00:17:24.652 "code": -32603, 00:17:24.652 "message": "Internal error" 00:17:24.652 } 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 74931 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74931 ']' 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74931 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74931 00:17:24.652 killing process with pid 74931 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74931' 00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74931 
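The failures traced above come from the key file still being world-readable: keyring_file_check_path reports "Invalid permissions ... 0100666", keyring_file_add_key returns "Operation not permitted", and the later nvmf_subsystem_add_host --psk key0 fails with "Internal error" because no key named key0 was ever registered. The keyring plugin appears to require that the PSK file not be accessible to group/others (0600 is accepted later in this run, 0666 is rejected). A minimal standalone sketch of that behaviour, assuming a running SPDK target and a hypothetical key path (the test itself uses /tmp/tmp.BKxIJCZRj7):

    KEY=/tmp/psk.example                                  # hypothetical path, for illustration only
    chmod 0666 "$KEY"
    scripts/rpc.py keyring_file_add_key key0 "$KEY" \
        || echo "rejected: key file is group/other accessible"
    chmod 0600 "$KEY"
    scripts/rpc.py keyring_file_add_key key0 "$KEY"       # accepted once the mode is 0600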
00:17:24.652 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74931 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.BKxIJCZRj7 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75018 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75018 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75018 ']' 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.030 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.030 [2024-09-28 01:30:21.701574] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:26.030 [2024-09-28 01:30:21.701752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.030 [2024-09-28 01:30:21.876155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.299 [2024-09-28 01:30:22.041261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.299 [2024-09-28 01:30:22.041644] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.299 [2024-09-28 01:30:22.041678] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.299 [2024-09-28 01:30:22.041696] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.299 [2024-09-28 01:30:22.041712] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:26.299 [2024-09-28 01:30:22.041752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.299 [2024-09-28 01:30:22.216592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:26.881 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.881 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:26.881 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:26.881 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:26.881 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.881 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.882 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.BKxIJCZRj7 00:17:26.882 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BKxIJCZRj7 00:17:26.882 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:27.140 [2024-09-28 01:30:22.921719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.140 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:27.398 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:27.657 [2024-09-28 01:30:23.458004] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:27.657 [2024-09-28 01:30:23.458625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:27.657 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:27.916 malloc0 00:17:27.916 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:28.175 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:28.434 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:28.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
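With the PSK back at mode 0600, setup_nvmf_tgt completes: transport, subsystem, TLS listener, namespace, keyring entry and the host with --psk are all accepted. Condensed from the rpc.py calls traced above (same paths and NQNs the test uses), the target-side sequence is roughly:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k marks the listener as TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7          # succeeds now that the file is 0600
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The bdevperf initiator started next mirrors the keyring step against its own RPC socket (-s /var/tmp/bdevperf.sock) before calling bdev_nvme_attach_controller with --psk key0, which is what produces the TLSTESTn1 bdev seen below.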
00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=75075 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 75075 /var/tmp/bdevperf.sock 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75075 ']' 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.692 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.951 [2024-09-28 01:30:24.669768] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:28.951 [2024-09-28 01:30:24.669916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75075 ] 00:17:28.951 [2024-09-28 01:30:24.835828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.210 [2024-09-28 01:30:25.069495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.468 [2024-09-28 01:30:25.237993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:29.727 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.727 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:29.727 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:29.987 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:30.245 [2024-09-28 01:30:26.159374] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.505 TLSTESTn1 00:17:30.505 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:30.765 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:30.765 "subsystems": [ 00:17:30.765 { 00:17:30.765 "subsystem": "keyring", 00:17:30.765 "config": [ 00:17:30.765 { 00:17:30.765 "method": "keyring_file_add_key", 00:17:30.765 "params": { 00:17:30.765 "name": "key0", 00:17:30.765 "path": "/tmp/tmp.BKxIJCZRj7" 00:17:30.765 } 00:17:30.765 } 00:17:30.765 ] 00:17:30.765 }, 
00:17:30.765 { 00:17:30.765 "subsystem": "iobuf", 00:17:30.765 "config": [ 00:17:30.765 { 00:17:30.765 "method": "iobuf_set_options", 00:17:30.765 "params": { 00:17:30.765 "small_pool_count": 8192, 00:17:30.765 "large_pool_count": 1024, 00:17:30.765 "small_bufsize": 8192, 00:17:30.765 "large_bufsize": 135168 00:17:30.765 } 00:17:30.765 } 00:17:30.765 ] 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "subsystem": "sock", 00:17:30.765 "config": [ 00:17:30.765 { 00:17:30.765 "method": "sock_set_default_impl", 00:17:30.765 "params": { 00:17:30.765 "impl_name": "uring" 00:17:30.765 } 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "method": "sock_impl_set_options", 00:17:30.765 "params": { 00:17:30.765 "impl_name": "ssl", 00:17:30.765 "recv_buf_size": 4096, 00:17:30.765 "send_buf_size": 4096, 00:17:30.765 "enable_recv_pipe": true, 00:17:30.765 "enable_quickack": false, 00:17:30.765 "enable_placement_id": 0, 00:17:30.765 "enable_zerocopy_send_server": true, 00:17:30.765 "enable_zerocopy_send_client": false, 00:17:30.765 "zerocopy_threshold": 0, 00:17:30.765 "tls_version": 0, 00:17:30.765 "enable_ktls": false 00:17:30.765 } 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "method": "sock_impl_set_options", 00:17:30.765 "params": { 00:17:30.765 "impl_name": "posix", 00:17:30.765 "recv_buf_size": 2097152, 00:17:30.765 "send_buf_size": 2097152, 00:17:30.765 "enable_recv_pipe": true, 00:17:30.765 "enable_quickack": false, 00:17:30.765 "enable_placement_id": 0, 00:17:30.765 "enable_zerocopy_send_server": true, 00:17:30.765 "enable_zerocopy_send_client": false, 00:17:30.765 "zerocopy_threshold": 0, 00:17:30.765 "tls_version": 0, 00:17:30.765 "enable_ktls": false 00:17:30.765 } 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "method": "sock_impl_set_options", 00:17:30.765 "params": { 00:17:30.765 "impl_name": "uring", 00:17:30.765 "recv_buf_size": 2097152, 00:17:30.765 "send_buf_size": 2097152, 00:17:30.765 "enable_recv_pipe": true, 00:17:30.765 "enable_quickack": false, 00:17:30.765 "enable_placement_id": 0, 00:17:30.765 "enable_zerocopy_send_server": false, 00:17:30.765 "enable_zerocopy_send_client": false, 00:17:30.765 "zerocopy_threshold": 0, 00:17:30.765 "tls_version": 0, 00:17:30.765 "enable_ktls": false 00:17:30.765 } 00:17:30.765 } 00:17:30.765 ] 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "subsystem": "vmd", 00:17:30.765 "config": [] 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "subsystem": "accel", 00:17:30.765 "config": [ 00:17:30.765 { 00:17:30.765 "method": "accel_set_options", 00:17:30.765 "params": { 00:17:30.765 "small_cache_size": 128, 00:17:30.765 "large_cache_size": 16, 00:17:30.765 "task_count": 2048, 00:17:30.765 "sequence_count": 2048, 00:17:30.765 "buf_count": 2048 00:17:30.765 } 00:17:30.765 } 00:17:30.765 ] 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "subsystem": "bdev", 00:17:30.765 "config": [ 00:17:30.765 { 00:17:30.765 "method": "bdev_set_options", 00:17:30.765 "params": { 00:17:30.765 "bdev_io_pool_size": 65535, 00:17:30.765 "bdev_io_cache_size": 256, 00:17:30.765 "bdev_auto_examine": true, 00:17:30.765 "iobuf_small_cache_size": 128, 00:17:30.765 "iobuf_large_cache_size": 16 00:17:30.765 } 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "method": "bdev_raid_set_options", 00:17:30.765 "params": { 00:17:30.765 "process_window_size_kb": 1024, 00:17:30.765 "process_max_bandwidth_mb_sec": 0 00:17:30.765 } 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "method": "bdev_iscsi_set_options", 00:17:30.765 "params": { 00:17:30.765 "timeout_sec": 30 00:17:30.765 } 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 
"method": "bdev_nvme_set_options", 00:17:30.765 "params": { 00:17:30.765 "action_on_timeout": "none", 00:17:30.765 "timeout_us": 0, 00:17:30.765 "timeout_admin_us": 0, 00:17:30.765 "keep_alive_timeout_ms": 10000, 00:17:30.765 "arbitration_burst": 0, 00:17:30.766 "low_priority_weight": 0, 00:17:30.766 "medium_priority_weight": 0, 00:17:30.766 "high_priority_weight": 0, 00:17:30.766 "nvme_adminq_poll_period_us": 10000, 00:17:30.766 "nvme_ioq_poll_period_us": 0, 00:17:30.766 "io_queue_requests": 0, 00:17:30.766 "delay_cmd_submit": true, 00:17:30.766 "transport_retry_count": 4, 00:17:30.766 "bdev_retry_count": 3, 00:17:30.766 "transport_ack_timeout": 0, 00:17:30.766 "ctrlr_loss_timeout_sec": 0, 00:17:30.766 "reconnect_delay_sec": 0, 00:17:30.766 "fast_io_fail_timeout_sec": 0, 00:17:30.766 "disable_auto_failback": false, 00:17:30.766 "generate_uuids": false, 00:17:30.766 "transport_tos": 0, 00:17:30.766 "nvme_error_stat": false, 00:17:30.766 "rdma_srq_size": 0, 00:17:30.766 "io_path_stat": false, 00:17:30.766 "allow_accel_sequence": false, 00:17:30.766 "rdma_max_cq_size": 0, 00:17:30.766 "rdma_cm_event_timeout_ms": 0, 00:17:30.766 "dhchap_digests": [ 00:17:30.766 "sha256", 00:17:30.766 "sha384", 00:17:30.766 "sha512" 00:17:30.766 ], 00:17:30.766 "dhchap_dhgroups": [ 00:17:30.766 "null", 00:17:30.766 "ffdhe2048", 00:17:30.766 "ffdhe3072", 00:17:30.766 "ffdhe4096", 00:17:30.766 "ffdhe6144", 00:17:30.766 "ffdhe8192" 00:17:30.766 ] 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_nvme_set_hotplug", 00:17:30.766 "params": { 00:17:30.766 "period_us": 100000, 00:17:30.766 "enable": false 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_malloc_create", 00:17:30.766 "params": { 00:17:30.766 "name": "malloc0", 00:17:30.766 "num_blocks": 8192, 00:17:30.766 "block_size": 4096, 00:17:30.766 "physical_block_size": 4096, 00:17:30.766 "uuid": "828cdba9-6732-4af3-8879-219eb98bf038", 00:17:30.766 "optimal_io_boundary": 0, 00:17:30.766 "md_size": 0, 00:17:30.766 "dif_type": 0, 00:17:30.766 "dif_is_head_of_md": false, 00:17:30.766 "dif_pi_format": 0 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_wait_for_examine" 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "subsystem": "nbd", 00:17:30.766 "config": [] 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "subsystem": "scheduler", 00:17:30.766 "config": [ 00:17:30.766 { 00:17:30.766 "method": "framework_set_scheduler", 00:17:30.766 "params": { 00:17:30.766 "name": "static" 00:17:30.766 } 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "subsystem": "nvmf", 00:17:30.766 "config": [ 00:17:30.766 { 00:17:30.766 "method": "nvmf_set_config", 00:17:30.766 "params": { 00:17:30.766 "discovery_filter": "match_any", 00:17:30.766 "admin_cmd_passthru": { 00:17:30.766 "identify_ctrlr": false 00:17:30.766 }, 00:17:30.766 "dhchap_digests": [ 00:17:30.766 "sha256", 00:17:30.766 "sha384", 00:17:30.766 "sha512" 00:17:30.766 ], 00:17:30.766 "dhchap_dhgroups": [ 00:17:30.766 "null", 00:17:30.766 "ffdhe2048", 00:17:30.766 "ffdhe3072", 00:17:30.766 "ffdhe4096", 00:17:30.766 "ffdhe6144", 00:17:30.766 "ffdhe8192" 00:17:30.766 ] 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "nvmf_set_max_subsystems", 00:17:30.766 "params": { 00:17:30.766 "max_subsystems": 1024 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "nvmf_set_crdt", 00:17:30.766 "params": { 00:17:30.766 "crdt1": 0, 00:17:30.766 "crdt2": 0, 00:17:30.766 "crdt3": 0 
00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "nvmf_create_transport", 00:17:30.766 "params": { 00:17:30.766 "trtype": "TCP", 00:17:30.766 "max_queue_depth": 128, 00:17:30.766 "max_io_qpairs_per_ctrlr": 127, 00:17:30.766 "in_capsule_data_size": 4096, 00:17:30.766 "max_io_size": 131072, 00:17:30.766 "io_unit_size": 131072, 00:17:30.766 "max_aq_depth": 128, 00:17:30.766 "num_shared_buffers": 511, 00:17:30.766 "buf_cache_size": 4294967295, 00:17:30.766 "dif_insert_or_strip": false, 00:17:30.766 "zcopy": false, 00:17:30.766 "c2h_success": false, 00:17:30.766 "sock_priority": 0, 00:17:30.766 "abort_timeout_sec": 1, 00:17:30.766 "ack_timeout": 0, 00:17:30.766 "data_wr_pool_size": 0 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "nvmf_create_subsystem", 00:17:30.766 "params": { 00:17:30.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.766 "allow_any_host": false, 00:17:30.766 "serial_number": "SPDK00000000000001", 00:17:30.766 "model_number": "SPDK bdev Controller", 00:17:30.766 "max_namespaces": 10, 00:17:30.766 "min_cntlid": 1, 00:17:30.766 "max_cntlid": 65519, 00:17:30.766 "ana_reporting": false 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "nvmf_subsystem_add_host", 00:17:30.766 "params": { 00:17:30.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.766 "host": "nqn.2016-06.io.spdk:host1", 00:17:30.766 "psk": "key0" 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "nvmf_subsystem_add_ns", 00:17:30.766 "params": { 00:17:30.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.766 "namespace": { 00:17:30.766 "nsid": 1, 00:17:30.766 "bdev_name": "malloc0", 00:17:30.766 "nguid": "828CDBA967324AF38879219EB98BF038", 00:17:30.766 "uuid": "828cdba9-6732-4af3-8879-219eb98bf038", 00:17:30.766 "no_auto_visible": false 00:17:30.766 } 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "nvmf_subsystem_add_listener", 00:17:30.766 "params": { 00:17:30.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.766 "listen_address": { 00:17:30.766 "trtype": "TCP", 00:17:30.766 "adrfam": "IPv4", 00:17:30.766 "traddr": "10.0.0.3", 00:17:30.766 "trsvcid": "4420" 00:17:30.766 }, 00:17:30.766 "secure_channel": true 00:17:30.766 } 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 }' 00:17:30.766 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:31.026 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:31.026 "subsystems": [ 00:17:31.026 { 00:17:31.026 "subsystem": "keyring", 00:17:31.026 "config": [ 00:17:31.026 { 00:17:31.026 "method": "keyring_file_add_key", 00:17:31.026 "params": { 00:17:31.026 "name": "key0", 00:17:31.026 "path": "/tmp/tmp.BKxIJCZRj7" 00:17:31.026 } 00:17:31.026 } 00:17:31.026 ] 00:17:31.026 }, 00:17:31.026 { 00:17:31.026 "subsystem": "iobuf", 00:17:31.026 "config": [ 00:17:31.026 { 00:17:31.026 "method": "iobuf_set_options", 00:17:31.026 "params": { 00:17:31.026 "small_pool_count": 8192, 00:17:31.026 "large_pool_count": 1024, 00:17:31.026 "small_bufsize": 8192, 00:17:31.026 "large_bufsize": 135168 00:17:31.026 } 00:17:31.026 } 00:17:31.026 ] 00:17:31.026 }, 00:17:31.026 { 00:17:31.026 "subsystem": "sock", 00:17:31.026 "config": [ 00:17:31.026 { 00:17:31.026 "method": "sock_set_default_impl", 00:17:31.026 "params": { 00:17:31.026 "impl_name": "uring" 00:17:31.026 } 00:17:31.026 }, 00:17:31.026 { 00:17:31.026 "method": 
"sock_impl_set_options", 00:17:31.026 "params": { 00:17:31.026 "impl_name": "ssl", 00:17:31.026 "recv_buf_size": 4096, 00:17:31.026 "send_buf_size": 4096, 00:17:31.026 "enable_recv_pipe": true, 00:17:31.026 "enable_quickack": false, 00:17:31.026 "enable_placement_id": 0, 00:17:31.026 "enable_zerocopy_send_server": true, 00:17:31.026 "enable_zerocopy_send_client": false, 00:17:31.026 "zerocopy_threshold": 0, 00:17:31.026 "tls_version": 0, 00:17:31.026 "enable_ktls": false 00:17:31.026 } 00:17:31.026 }, 00:17:31.026 { 00:17:31.026 "method": "sock_impl_set_options", 00:17:31.027 "params": { 00:17:31.027 "impl_name": "posix", 00:17:31.027 "recv_buf_size": 2097152, 00:17:31.027 "send_buf_size": 2097152, 00:17:31.027 "enable_recv_pipe": true, 00:17:31.027 "enable_quickack": false, 00:17:31.027 "enable_placement_id": 0, 00:17:31.027 "enable_zerocopy_send_server": true, 00:17:31.027 "enable_zerocopy_send_client": false, 00:17:31.027 "zerocopy_threshold": 0, 00:17:31.027 "tls_version": 0, 00:17:31.027 "enable_ktls": false 00:17:31.027 } 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "method": "sock_impl_set_options", 00:17:31.027 "params": { 00:17:31.027 "impl_name": "uring", 00:17:31.027 "recv_buf_size": 2097152, 00:17:31.027 "send_buf_size": 2097152, 00:17:31.027 "enable_recv_pipe": true, 00:17:31.027 "enable_quickack": false, 00:17:31.027 "enable_placement_id": 0, 00:17:31.027 "enable_zerocopy_send_server": false, 00:17:31.027 "enable_zerocopy_send_client": false, 00:17:31.027 "zerocopy_threshold": 0, 00:17:31.027 "tls_version": 0, 00:17:31.027 "enable_ktls": false 00:17:31.027 } 00:17:31.027 } 00:17:31.027 ] 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "subsystem": "vmd", 00:17:31.027 "config": [] 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "subsystem": "accel", 00:17:31.027 "config": [ 00:17:31.027 { 00:17:31.027 "method": "accel_set_options", 00:17:31.027 "params": { 00:17:31.027 "small_cache_size": 128, 00:17:31.027 "large_cache_size": 16, 00:17:31.027 "task_count": 2048, 00:17:31.027 "sequence_count": 2048, 00:17:31.027 "buf_count": 2048 00:17:31.027 } 00:17:31.027 } 00:17:31.027 ] 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "subsystem": "bdev", 00:17:31.027 "config": [ 00:17:31.027 { 00:17:31.027 "method": "bdev_set_options", 00:17:31.027 "params": { 00:17:31.027 "bdev_io_pool_size": 65535, 00:17:31.027 "bdev_io_cache_size": 256, 00:17:31.027 "bdev_auto_examine": true, 00:17:31.027 "iobuf_small_cache_size": 128, 00:17:31.027 "iobuf_large_cache_size": 16 00:17:31.027 } 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "method": "bdev_raid_set_options", 00:17:31.027 "params": { 00:17:31.027 "process_window_size_kb": 1024, 00:17:31.027 "process_max_bandwidth_mb_sec": 0 00:17:31.027 } 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "method": "bdev_iscsi_set_options", 00:17:31.027 "params": { 00:17:31.027 "timeout_sec": 30 00:17:31.027 } 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "method": "bdev_nvme_set_options", 00:17:31.027 "params": { 00:17:31.027 "action_on_timeout": "none", 00:17:31.027 "timeout_us": 0, 00:17:31.027 "timeout_admin_us": 0, 00:17:31.027 "keep_alive_timeout_ms": 10000, 00:17:31.027 "arbitration_burst": 0, 00:17:31.027 "low_priority_weight": 0, 00:17:31.027 "medium_priority_weight": 0, 00:17:31.027 "high_priority_weight": 0, 00:17:31.027 "nvme_adminq_poll_period_us": 10000, 00:17:31.027 "nvme_ioq_poll_period_us": 0, 00:17:31.027 "io_queue_requests": 512, 00:17:31.027 "delay_cmd_submit": true, 00:17:31.027 "transport_retry_count": 4, 00:17:31.027 "bdev_retry_count": 3, 00:17:31.027 
"transport_ack_timeout": 0, 00:17:31.027 "ctrlr_loss_timeout_sec": 0, 00:17:31.027 "reconnect_delay_sec": 0, 00:17:31.027 "fast_io_fail_timeout_sec": 0, 00:17:31.027 "disable_auto_failback": false, 00:17:31.027 "generate_uuids": false, 00:17:31.027 "transport_tos": 0, 00:17:31.027 "nvme_error_stat": false, 00:17:31.027 "rdma_srq_size": 0, 00:17:31.027 "io_path_stat": false, 00:17:31.027 "allow_accel_sequence": false, 00:17:31.027 "rdma_max_cq_size": 0, 00:17:31.027 "rdma_cm_event_timeout_ms": 0, 00:17:31.027 "dhchap_digests": [ 00:17:31.027 "sha256", 00:17:31.027 "sha384", 00:17:31.027 "sha512" 00:17:31.027 ], 00:17:31.027 "dhchap_dhgroups": [ 00:17:31.027 "null", 00:17:31.027 "ffdhe2048", 00:17:31.027 "ffdhe3072", 00:17:31.027 "ffdhe4096", 00:17:31.027 "ffdhe6144", 00:17:31.027 "ffdhe8192" 00:17:31.027 ] 00:17:31.027 } 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "method": "bdev_nvme_attach_controller", 00:17:31.027 "params": { 00:17:31.027 "name": "TLSTEST", 00:17:31.027 "trtype": "TCP", 00:17:31.027 "adrfam": "IPv4", 00:17:31.027 "traddr": "10.0.0.3", 00:17:31.027 "trsvcid": "4420", 00:17:31.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.027 "prchk_reftag": false, 00:17:31.027 "prchk_guard": false, 00:17:31.027 "ctrlr_loss_timeout_sec": 0, 00:17:31.027 "reconnect_delay_sec": 0, 00:17:31.027 "fast_io_fail_timeout_sec": 0, 00:17:31.027 "psk": "key0", 00:17:31.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.027 "hdgst": false, 00:17:31.027 "ddgst": false 00:17:31.027 } 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "method": "bdev_nvme_set_hotplug", 00:17:31.027 "params": { 00:17:31.027 "period_us": 100000, 00:17:31.027 "enable": false 00:17:31.027 } 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "method": "bdev_wait_for_examine" 00:17:31.027 } 00:17:31.027 ] 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "subsystem": "nbd", 00:17:31.027 "config": [] 00:17:31.027 } 00:17:31.027 ] 00:17:31.027 }' 00:17:31.027 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 75075 00:17:31.027 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75075 ']' 00:17:31.027 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75075 00:17:31.286 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:31.286 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.286 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75075 00:17:31.286 killing process with pid 75075 00:17:31.286 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.286 00:17:31.286 Latency(us) 00:17:31.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.286 =================================================================================================================== 00:17:31.286 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.286 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:31.286 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:31.286 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75075' 00:17:31.286 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75075 00:17:31.286 01:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75075 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 75018 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75018 ']' 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75018 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75018 00:17:32.224 killing process with pid 75018 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75018' 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75018 00:17:32.224 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75018 00:17:33.161 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:33.161 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:33.161 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.161 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.161 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:33.161 "subsystems": [ 00:17:33.161 { 00:17:33.161 "subsystem": "keyring", 00:17:33.161 "config": [ 00:17:33.161 { 00:17:33.161 "method": "keyring_file_add_key", 00:17:33.161 "params": { 00:17:33.161 "name": "key0", 00:17:33.161 "path": "/tmp/tmp.BKxIJCZRj7" 00:17:33.161 } 00:17:33.161 } 00:17:33.161 ] 00:17:33.161 }, 00:17:33.161 { 00:17:33.161 "subsystem": "iobuf", 00:17:33.161 "config": [ 00:17:33.161 { 00:17:33.161 "method": "iobuf_set_options", 00:17:33.161 "params": { 00:17:33.161 "small_pool_count": 8192, 00:17:33.161 "large_pool_count": 1024, 00:17:33.161 "small_bufsize": 8192, 00:17:33.161 "large_bufsize": 135168 00:17:33.161 } 00:17:33.161 } 00:17:33.161 ] 00:17:33.161 }, 00:17:33.161 { 00:17:33.161 "subsystem": "sock", 00:17:33.161 "config": [ 00:17:33.161 { 00:17:33.161 "method": "sock_set_default_impl", 00:17:33.161 "params": { 00:17:33.161 "impl_name": "uring" 00:17:33.161 } 00:17:33.161 }, 00:17:33.161 { 00:17:33.161 "method": "sock_impl_set_options", 00:17:33.161 "params": { 00:17:33.161 "impl_name": "ssl", 00:17:33.161 "recv_buf_size": 4096, 00:17:33.161 "send_buf_size": 4096, 00:17:33.161 "enable_recv_pipe": true, 00:17:33.161 "enable_quickack": false, 00:17:33.161 "enable_placement_id": 0, 00:17:33.162 "enable_zerocopy_send_server": true, 00:17:33.162 "enable_zerocopy_send_client": false, 00:17:33.162 "zerocopy_threshold": 0, 00:17:33.162 "tls_version": 0, 00:17:33.162 "enable_ktls": false 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "sock_impl_set_options", 00:17:33.162 "params": { 00:17:33.162 "impl_name": "posix", 00:17:33.162 "recv_buf_size": 
2097152, 00:17:33.162 "send_buf_size": 2097152, 00:17:33.162 "enable_recv_pipe": true, 00:17:33.162 "enable_quickack": false, 00:17:33.162 "enable_placement_id": 0, 00:17:33.162 "enable_zerocopy_send_server": true, 00:17:33.162 "enable_zerocopy_send_client": false, 00:17:33.162 "zerocopy_threshold": 0, 00:17:33.162 "tls_version": 0, 00:17:33.162 "enable_ktls": false 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "sock_impl_set_options", 00:17:33.162 "params": { 00:17:33.162 "impl_name": "uring", 00:17:33.162 "recv_buf_size": 2097152, 00:17:33.162 "send_buf_size": 2097152, 00:17:33.162 "enable_recv_pipe": true, 00:17:33.162 "enable_quickack": false, 00:17:33.162 "enable_placement_id": 0, 00:17:33.162 "enable_zerocopy_send_server": false, 00:17:33.162 "enable_zerocopy_send_client": false, 00:17:33.162 "zerocopy_threshold": 0, 00:17:33.162 "tls_version": 0, 00:17:33.162 "enable_ktls": false 00:17:33.162 } 00:17:33.162 } 00:17:33.162 ] 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "subsystem": "vmd", 00:17:33.162 "config": [] 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "subsystem": "accel", 00:17:33.162 "config": [ 00:17:33.162 { 00:17:33.162 "method": "accel_set_options", 00:17:33.162 "params": { 00:17:33.162 "small_cache_size": 128, 00:17:33.162 "large_cache_size": 16, 00:17:33.162 "task_count": 2048, 00:17:33.162 "sequence_count": 2048, 00:17:33.162 "buf_count": 2048 00:17:33.162 } 00:17:33.162 } 00:17:33.162 ] 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "subsystem": "bdev", 00:17:33.162 "config": [ 00:17:33.162 { 00:17:33.162 "method": "bdev_set_options", 00:17:33.162 "params": { 00:17:33.162 "bdev_io_pool_size": 65535, 00:17:33.162 "bdev_io_cache_size": 256, 00:17:33.162 "bdev_auto_examine": true, 00:17:33.162 "iobuf_small_cache_size": 128, 00:17:33.162 "iobuf_large_cache_size": 16 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "bdev_raid_set_options", 00:17:33.162 "params": { 00:17:33.162 "process_window_size_kb": 1024, 00:17:33.162 "process_max_bandwidth_mb_sec": 0 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "bdev_iscsi_set_options", 00:17:33.162 "params": { 00:17:33.162 "timeout_sec": 30 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "bdev_nvme_set_options", 00:17:33.162 "params": { 00:17:33.162 "action_on_timeout": "none", 00:17:33.162 "timeout_us": 0, 00:17:33.162 "timeout_admin_us": 0, 00:17:33.162 "keep_alive_timeout_ms": 10000, 00:17:33.162 "arbitration_burst": 0, 00:17:33.162 "low_priority_weight": 0, 00:17:33.162 "medium_priority_weight": 0, 00:17:33.162 "high_priority_weight": 0, 00:17:33.162 "nvme_adminq_poll_period_us": 10000, 00:17:33.162 "nvme_ioq_poll_period_us": 0, 00:17:33.162 "io_queue_requests": 0, 00:17:33.162 "delay_cmd_submit": true, 00:17:33.162 "transport_retry_count": 4, 00:17:33.162 "bdev_retry_count": 3, 00:17:33.162 "transport_ack_timeout": 0, 00:17:33.162 "ctrlr_loss_timeout_sec": 0, 00:17:33.162 "reconnect_delay_sec": 0, 00:17:33.162 "fast_io_fail_timeout_sec": 0, 00:17:33.162 "disable_auto_failback": false, 00:17:33.162 "generate_uuids": false, 00:17:33.162 "transport_tos": 0, 00:17:33.162 "nvme_error_stat": false, 00:17:33.162 "rdma_srq_size": 0, 00:17:33.162 "io_path_stat": false, 00:17:33.162 "allow_accel_sequence": false, 00:17:33.162 "rdma_max_cq_size": 0, 00:17:33.162 "rdma_cm_event_timeout_ms": 0, 00:17:33.162 "dhchap_digests": [ 00:17:33.162 "sha256", 00:17:33.162 "sha384", 00:17:33.162 "sha512" 00:17:33.162 ], 00:17:33.162 "dhchap_dhgroups": [ 00:17:33.162 
"null", 00:17:33.162 "ffdhe2048", 00:17:33.162 "ffdhe3072", 00:17:33.162 "ffdhe4096", 00:17:33.162 "ffdhe6144", 00:17:33.162 "ffdhe8192" 00:17:33.162 ] 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "bdev_nvme_set_hotplug", 00:17:33.162 "params": { 00:17:33.162 "period_us": 100000, 00:17:33.162 "enable": false 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "bdev_malloc_create", 00:17:33.162 "params": { 00:17:33.162 "name": "malloc0", 00:17:33.162 "num_blocks": 8192, 00:17:33.162 "block_size": 4096, 00:17:33.162 "physical_block_size": 4096, 00:17:33.162 "uuid": "828cdba9-6732-4af3-8879-219eb98bf038", 00:17:33.162 "optimal_io_boundary": 0, 00:17:33.162 "md_size": 0, 00:17:33.162 "dif_type": 0, 00:17:33.162 "dif_is_head_of_md": false, 00:17:33.162 "dif_pi_format": 0 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "bdev_wait_for_examine" 00:17:33.162 } 00:17:33.162 ] 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "subsystem": "nbd", 00:17:33.162 "config": [] 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "subsystem": "scheduler", 00:17:33.162 "config": [ 00:17:33.162 { 00:17:33.162 "method": "framework_set_scheduler", 00:17:33.162 "params": { 00:17:33.162 "name": "static" 00:17:33.162 } 00:17:33.162 } 00:17:33.162 ] 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "subsystem": "nvmf", 00:17:33.162 "config": [ 00:17:33.162 { 00:17:33.162 "method": "nvmf_set_config", 00:17:33.162 "params": { 00:17:33.162 "discovery_filter": "match_any", 00:17:33.162 "admin_cmd_passthru": { 00:17:33.162 "identify_ctrlr": false 00:17:33.162 }, 00:17:33.162 "dhchap_digests": [ 00:17:33.162 "sha256", 00:17:33.162 "sha384", 00:17:33.162 "sha512" 00:17:33.162 ], 00:17:33.162 "dhchap_dhgroups": [ 00:17:33.162 "null", 00:17:33.162 "ffdhe2048", 00:17:33.162 "ffdhe3072", 00:17:33.162 "ffdhe4096", 00:17:33.162 "ffdhe6144", 00:17:33.162 "ffdhe8192" 00:17:33.162 ] 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "nvmf_set_max_subsystems", 00:17:33.162 "params": { 00:17:33.162 "max_subsystems": 1024 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "nvmf_set_crdt", 00:17:33.162 "params": { 00:17:33.162 "crdt1": 0, 00:17:33.162 "crdt2": 0, 00:17:33.162 "crdt3": 0 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "nvmf_create_transport", 00:17:33.162 "params": { 00:17:33.162 "trtype": "TCP", 00:17:33.162 "max_queue_depth": 128, 00:17:33.162 "max_io_qpairs_per_ctrlr": 127, 00:17:33.162 "in_capsule_data_size": 4096, 00:17:33.162 "max_io_size": 131072, 00:17:33.162 "io_unit_size": 131072, 00:17:33.162 "max_aq_depth": 128, 00:17:33.162 "num_shared_buffers": 511, 00:17:33.162 "buf_cache_size": 4294967295, 00:17:33.162 "dif_insert_or_strip": false, 00:17:33.162 "zcopy": false, 00:17:33.162 "c2h_success": false, 00:17:33.162 "sock_priority": 0, 00:17:33.162 "abort_timeout_sec": 1, 00:17:33.162 "ack_timeout": 0, 00:17:33.162 "data_wr_pool_size": 0 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.162 "method": "nvmf_create_subsystem", 00:17:33.162 "params": { 00:17:33.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.162 "allow_any_host": false, 00:17:33.162 "serial_number": "SPDK00000000000001", 00:17:33.162 "model_number": "SPDK bdev Controller", 00:17:33.162 "max_namespaces": 10, 00:17:33.162 "min_cntlid": 1, 00:17:33.162 "max_cntlid": 65519, 00:17:33.162 "ana_reporting": false 00:17:33.162 } 00:17:33.162 }, 00:17:33.162 { 00:17:33.163 "method": "nvmf_subsystem_add_host", 00:17:33.163 "params": { 00:17:33.163 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:17:33.163 "host": "nqn.2016-06.io.spdk:host1", 00:17:33.163 "psk": "key0" 00:17:33.163 } 00:17:33.163 }, 00:17:33.163 { 00:17:33.163 "method": "nvmf_subsystem_add_ns", 00:17:33.163 "params": { 00:17:33.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.163 "namespace": { 00:17:33.163 "nsid": 1, 00:17:33.163 "bdev_name": "malloc0", 00:17:33.163 "nguid": "828CDBA967324AF38879219EB98BF038", 00:17:33.163 "uuid": "828cdba9-6732-4af3-8879-219eb98bf038", 00:17:33.163 "no_auto_visible": false 00:17:33.163 } 00:17:33.163 } 00:17:33.163 }, 00:17:33.163 { 00:17:33.163 "method": "nvmf_subsystem_add_listener", 00:17:33.163 "params": { 00:17:33.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.163 "listen_address": { 00:17:33.163 "trtype": "TCP", 00:17:33.163 "adrfam": "IPv4", 00:17:33.163 "traddr": "10.0.0.3", 00:17:33.163 "trsvcid": "4420" 00:17:33.163 }, 00:17:33.163 "secure_channel": true 00:17:33.163 } 00:17:33.163 } 00:17:33.163 ] 00:17:33.163 } 00:17:33.163 ] 00:17:33.163 }' 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75143 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75143 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75143 ']' 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.163 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.422 [2024-09-28 01:30:29.171016] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:33.422 [2024-09-28 01:30:29.171494] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.422 [2024-09-28 01:30:29.343389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.681 [2024-09-28 01:30:29.494356] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.681 [2024-09-28 01:30:29.494423] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.681 [2024-09-28 01:30:29.494441] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.681 [2024-09-28 01:30:29.494492] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.681 [2024-09-28 01:30:29.494504] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.681 [2024-09-28 01:30:29.494640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.940 [2024-09-28 01:30:29.771287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.199 [2024-09-28 01:30:29.923589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.199 [2024-09-28 01:30:29.955555] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:34.199 [2024-09-28 01:30:29.955784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:34.199 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.199 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:34.199 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:34.199 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:34.199 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=75176 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 75176 /var/tmp/bdevperf.sock 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75176 ']' 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
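The restarted target above and the bdevperf process launched next are both driven from the JSON captured earlier with save_config rather than from fresh RPC calls: tls.sh echoes the saved tgtconf and bdevperfconf strings into file descriptors passed as -c /dev/fd/62 and -c /dev/fd/63. A sketch of that pattern, assuming bash process substitution is what produces those descriptors:

    # capture the running configuration of each process over JSON-RPC
    tgtconf=$(scripts/rpc.py save_config)
    bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # replay it on the next start; <(echo ...) appears as /dev/fd/NN inside the process
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf") &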
00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:34.458 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:34.458 "subsystems": [ 00:17:34.458 { 00:17:34.458 "subsystem": "keyring", 00:17:34.459 "config": [ 00:17:34.459 { 00:17:34.459 "method": "keyring_file_add_key", 00:17:34.459 "params": { 00:17:34.459 "name": "key0", 00:17:34.459 "path": "/tmp/tmp.BKxIJCZRj7" 00:17:34.459 } 00:17:34.459 } 00:17:34.459 ] 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "subsystem": "iobuf", 00:17:34.459 "config": [ 00:17:34.459 { 00:17:34.459 "method": "iobuf_set_options", 00:17:34.459 "params": { 00:17:34.459 "small_pool_count": 8192, 00:17:34.459 "large_pool_count": 1024, 00:17:34.459 "small_bufsize": 8192, 00:17:34.459 "large_bufsize": 135168 00:17:34.459 } 00:17:34.459 } 00:17:34.459 ] 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "subsystem": "sock", 00:17:34.459 "config": [ 00:17:34.459 { 00:17:34.459 "method": "sock_set_default_impl", 00:17:34.459 "params": { 00:17:34.459 "impl_name": "uring" 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "method": "sock_impl_set_options", 00:17:34.459 "params": { 00:17:34.459 "impl_name": "ssl", 00:17:34.459 "recv_buf_size": 4096, 00:17:34.459 "send_buf_size": 4096, 00:17:34.459 "enable_recv_pipe": true, 00:17:34.459 "enable_quickack": false, 00:17:34.459 "enable_placement_id": 0, 00:17:34.459 "enable_zerocopy_send_server": true, 00:17:34.459 "enable_zerocopy_send_client": false, 00:17:34.459 "zerocopy_threshold": 0, 00:17:34.459 "tls_version": 0, 00:17:34.459 "enable_ktls": false 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "method": "sock_impl_set_options", 00:17:34.459 "params": { 00:17:34.459 "impl_name": "posix", 00:17:34.459 "recv_buf_size": 2097152, 00:17:34.459 "send_buf_size": 2097152, 00:17:34.459 "enable_recv_pipe": true, 00:17:34.459 "enable_quickack": false, 00:17:34.459 "enable_placement_id": 0, 00:17:34.459 "enable_zerocopy_send_server": true, 00:17:34.459 "enable_zerocopy_send_client": false, 00:17:34.459 "zerocopy_threshold": 0, 00:17:34.459 "tls_version": 0, 00:17:34.459 "enable_ktls": false 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "method": "sock_impl_set_options", 00:17:34.459 "params": { 00:17:34.459 "impl_name": "uring", 00:17:34.459 "recv_buf_size": 2097152, 00:17:34.459 "send_buf_size": 2097152, 00:17:34.459 "enable_recv_pipe": true, 00:17:34.459 "enable_quickack": false, 00:17:34.459 "enable_placement_id": 0, 00:17:34.459 "enable_zerocopy_send_server": false, 00:17:34.459 "enable_zerocopy_send_client": false, 00:17:34.459 "zerocopy_threshold": 0, 00:17:34.459 "tls_version": 0, 00:17:34.459 "enable_ktls": false 00:17:34.459 } 00:17:34.459 } 00:17:34.459 ] 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "subsystem": "vmd", 00:17:34.459 "config": [] 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "subsystem": "accel", 00:17:34.459 "config": [ 00:17:34.459 { 00:17:34.459 "method": "accel_set_options", 00:17:34.459 "params": { 00:17:34.459 "small_cache_size": 128, 00:17:34.459 "large_cache_size": 16, 00:17:34.459 "task_count": 2048, 00:17:34.459 "sequence_count": 2048, 00:17:34.459 "buf_count": 2048 
00:17:34.459 } 00:17:34.459 } 00:17:34.459 ] 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "subsystem": "bdev", 00:17:34.459 "config": [ 00:17:34.459 { 00:17:34.459 "method": "bdev_set_options", 00:17:34.459 "params": { 00:17:34.459 "bdev_io_pool_size": 65535, 00:17:34.459 "bdev_io_cache_size": 256, 00:17:34.459 "bdev_auto_examine": true, 00:17:34.459 "iobuf_small_cache_size": 128, 00:17:34.459 "iobuf_large_cache_size": 16 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "method": "bdev_raid_set_options", 00:17:34.459 "params": { 00:17:34.459 "process_window_size_kb": 1024, 00:17:34.459 "process_max_bandwidth_mb_sec": 0 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "method": "bdev_iscsi_set_options", 00:17:34.459 "params": { 00:17:34.459 "timeout_sec": 30 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "method": "bdev_nvme_set_options", 00:17:34.459 "params": { 00:17:34.459 "action_on_timeout": "none", 00:17:34.459 "timeout_us": 0, 00:17:34.459 "timeout_admin_us": 0, 00:17:34.459 "keep_alive_timeout_ms": 10000, 00:17:34.459 "arbitration_burst": 0, 00:17:34.459 "low_priority_weight": 0, 00:17:34.459 "medium_priority_weight": 0, 00:17:34.459 "high_priority_weight": 0, 00:17:34.459 "nvme_adminq_poll_period_us": 10000, 00:17:34.459 "nvme_ioq_poll_period_us": 0, 00:17:34.459 "io_queue_requests": 512, 00:17:34.459 "delay_cmd_submit": true, 00:17:34.459 "transport_retry_count": 4, 00:17:34.459 "bdev_retry_count": 3, 00:17:34.459 "transport_ack_timeout": 0, 00:17:34.459 "ctrlr_loss_timeout_sec": 0, 00:17:34.459 "reconnect_delay_sec": 0, 00:17:34.459 "fast_io_fail_timeout_sec": 0, 00:17:34.459 "disable_auto_failback": false, 00:17:34.459 "generate_uuids": false, 00:17:34.459 "transport_tos": 0, 00:17:34.459 "nvme_error_stat": false, 00:17:34.459 "rdma_srq_size": 0, 00:17:34.459 "io_path_stat": false, 00:17:34.459 "allow_accel_sequence": false, 00:17:34.459 "rdma_max_cq_size": 0, 00:17:34.459 "rdma_cm_event_timeout_ms": 0, 00:17:34.459 "dhchap_digests": [ 00:17:34.459 "sha256", 00:17:34.459 "sha384", 00:17:34.459 "sha512" 00:17:34.459 ], 00:17:34.459 "dhchap_dhgroups": [ 00:17:34.459 "null", 00:17:34.459 "ffdhe2048", 00:17:34.459 "ffdhe3072", 00:17:34.459 "ffdhe4096", 00:17:34.459 "ffdhe6144", 00:17:34.459 "ffdhe8192" 00:17:34.459 ] 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "method": "bdev_nvme_attach_controller", 00:17:34.459 "params": { 00:17:34.459 "name": "TLSTEST", 00:17:34.459 "trtype": "TCP", 00:17:34.459 "adrfam": "IPv4", 00:17:34.459 "traddr": "10.0.0.3", 00:17:34.459 "trsvcid": "4420", 00:17:34.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.459 "prchk_reftag": false, 00:17:34.459 "prchk_guard": false, 00:17:34.459 "ctrlr_loss_timeout_sec": 0, 00:17:34.459 "reconnect_delay_sec": 0, 00:17:34.459 "fast_io_fail_timeout_sec": 0, 00:17:34.459 "psk": "key0", 00:17:34.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.459 "hdgst": false, 00:17:34.459 "ddgst": false 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.459 "method": "bdev_nvme_set_hotplug", 00:17:34.459 "params": { 00:17:34.459 "period_us": 100000, 00:17:34.459 "enable": false 00:17:34.459 } 00:17:34.459 }, 00:17:34.459 { 00:17:34.460 "method": "bdev_wait_for_examine" 00:17:34.460 } 00:17:34.460 ] 00:17:34.460 }, 00:17:34.460 { 00:17:34.460 "subsystem": "nbd", 00:17:34.460 "config": [] 00:17:34.460 } 00:17:34.460 ] 00:17:34.460 }' 00:17:34.460 [2024-09-28 01:30:30.262161] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
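The bdevperf configuration above mirrors the target side: the key file is registered under the name key0, and bdev_nvme_attach_controller then references it through "psk": "key0". Issued interactively against the bdevperf RPC socket instead of via the piped JSON, the same two steps would look roughly like this (the later runs in this log do exactly that, only with -b nvme0; the -b TLSTEST name here just mirrors the "name" field above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1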
00:17:34.460 [2024-09-28 01:30:30.262332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75176 ] 00:17:34.719 [2024-09-28 01:30:30.437301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.979 [2024-09-28 01:30:30.664261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.979 [2024-09-28 01:30:30.906767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.238 [2024-09-28 01:30:31.006268] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:35.497 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:35.497 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:35.497 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:35.497 Running I/O for 10 seconds... 00:17:45.501 3328.00 IOPS, 13.00 MiB/s 3413.50 IOPS, 13.33 MiB/s 3451.33 IOPS, 13.48 MiB/s 3458.75 IOPS, 13.51 MiB/s 3459.40 IOPS, 13.51 MiB/s 3466.33 IOPS, 13.54 MiB/s 3470.57 IOPS, 13.56 MiB/s 3472.62 IOPS, 13.56 MiB/s 3455.00 IOPS, 13.50 MiB/s 3443.40 IOPS, 13.45 MiB/s 00:17:45.501 Latency(us) 00:17:45.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.501 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:45.501 Verification LBA range: start 0x0 length 0x2000 00:17:45.501 TLSTESTn1 : 10.02 3449.43 13.47 0.00 0.00 37045.88 5510.98 28001.75 00:17:45.501 =================================================================================================================== 00:17:45.501 Total : 3449.43 13.47 0.00 0.00 37045.88 5510.98 28001.75 00:17:45.501 { 00:17:45.501 "results": [ 00:17:45.501 { 00:17:45.501 "job": "TLSTESTn1", 00:17:45.501 "core_mask": "0x4", 00:17:45.501 "workload": "verify", 00:17:45.501 "status": "finished", 00:17:45.501 "verify_range": { 00:17:45.501 "start": 0, 00:17:45.501 "length": 8192 00:17:45.501 }, 00:17:45.501 "queue_depth": 128, 00:17:45.501 "io_size": 4096, 00:17:45.501 "runtime": 10.019034, 00:17:45.501 "iops": 3449.4343466645587, 00:17:45.501 "mibps": 13.474352916658432, 00:17:45.501 "io_failed": 0, 00:17:45.501 "io_timeout": 0, 00:17:45.501 "avg_latency_us": 37045.878518518526, 00:17:45.501 "min_latency_us": 5510.981818181818, 00:17:45.501 "max_latency_us": 28001.745454545453 00:17:45.501 } 00:17:45.501 ], 00:17:45.501 "core_count": 1 00:17:45.501 } 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 75176 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75176 ']' 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75176 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75176 00:17:45.501 killing process with pid 75176 00:17:45.501 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.501 00:17:45.501 Latency(us) 00:17:45.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.501 =================================================================================================================== 00:17:45.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75176' 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75176 00:17:45.501 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75176 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 75143 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75143 ']' 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75143 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75143 00:17:46.876 killing process with pid 75143 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75143' 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75143 00:17:46.876 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75143 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75328 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75328 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75328 ']' 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:17:47.812 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.813 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.813 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.813 [2024-09-28 01:30:43.587789] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:47.813 [2024-09-28 01:30:43.588152] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.071 [2024-09-28 01:30:43.750886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.071 [2024-09-28 01:30:43.960310] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.071 [2024-09-28 01:30:43.960389] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.071 [2024-09-28 01:30:43.960425] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.071 [2024-09-28 01:30:43.960440] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.071 [2024-09-28 01:30:43.960451] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.071 [2024-09-28 01:30:43.960521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.330 [2024-09-28 01:30:44.124861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.897 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.897 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:48.897 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:48.898 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:48.898 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.898 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.898 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.BKxIJCZRj7 00:17:48.898 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BKxIJCZRj7 00:17:48.898 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:48.898 [2024-09-28 01:30:44.775745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.898 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:49.156 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -k 00:17:49.415 [2024-09-28 01:30:45.335902] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:49.415 [2024-09-28 01:30:45.336190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:49.673 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:49.673 malloc0 00:17:49.932 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:49.932 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:50.190 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:50.448 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:50.448 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=75388 00:17:50.448 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:50.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.707 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 75388 /var/tmp/bdevperf.sock 00:17:50.707 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75388 ']' 00:17:50.707 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.707 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.707 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.707 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.707 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.707 [2024-09-28 01:30:46.464086] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
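Unlike the first target, this instance (pid 75328) is provisioned step by step through rpc.py; the calls are scattered through the trace above, so here they are collected in order. This is only a restatement of the commands already shown, issued against the target's default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0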
00:17:50.707 [2024-09-28 01:30:46.464246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75388 ] 00:17:50.707 [2024-09-28 01:30:46.625790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.966 [2024-09-28 01:30:46.832683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.225 [2024-09-28 01:30:46.985932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.483 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.483 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:51.483 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:52.050 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:52.050 [2024-09-28 01:30:47.948255] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.309 nvme0n1 00:17:52.309 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.309 Running I/O for 1 seconds... 00:17:53.687 3200.00 IOPS, 12.50 MiB/s 00:17:53.687 Latency(us) 00:17:53.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.687 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:53.687 Verification LBA range: start 0x0 length 0x2000 00:17:53.687 nvme0n1 : 1.03 3222.16 12.59 0.00 0.00 39204.14 11021.96 26214.40 00:17:53.687 =================================================================================================================== 00:17:53.687 Total : 3222.16 12.59 0.00 0.00 39204.14 11021.96 26214.40 00:17:53.687 { 00:17:53.687 "results": [ 00:17:53.687 { 00:17:53.687 "job": "nvme0n1", 00:17:53.687 "core_mask": "0x2", 00:17:53.687 "workload": "verify", 00:17:53.687 "status": "finished", 00:17:53.687 "verify_range": { 00:17:53.687 "start": 0, 00:17:53.687 "length": 8192 00:17:53.687 }, 00:17:53.687 "queue_depth": 128, 00:17:53.687 "io_size": 4096, 00:17:53.687 "runtime": 1.032848, 00:17:53.687 "iops": 3222.1585363964496, 00:17:53.687 "mibps": 12.586556782798631, 00:17:53.687 "io_failed": 0, 00:17:53.687 "io_timeout": 0, 00:17:53.687 "avg_latency_us": 39204.144335664336, 00:17:53.687 "min_latency_us": 11021.963636363636, 00:17:53.687 "max_latency_us": 26214.4 00:17:53.687 } 00:17:53.687 ], 00:17:53.687 "core_count": 1 00:17:53.687 } 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 75388 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75388 ']' 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75388 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:53.687 01:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75388 00:17:53.687 killing process with pid 75388 00:17:53.687 Received shutdown signal, test time was about 1.000000 seconds 00:17:53.687 00:17:53.687 Latency(us) 00:17:53.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.687 =================================================================================================================== 00:17:53.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75388' 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75388 00:17:53.687 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75388 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 75328 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75328 ']' 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75328 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75328 00:17:54.625 killing process with pid 75328 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75328' 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75328 00:17:54.625 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75328 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75459 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75459 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75459 ']' 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:17:55.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.562 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.562 [2024-09-28 01:30:51.492921] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:55.562 [2024-09-28 01:30:51.493364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.821 [2024-09-28 01:30:51.660262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.080 [2024-09-28 01:30:51.814933] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.080 [2024-09-28 01:30:51.815240] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.080 [2024-09-28 01:30:51.815276] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.080 [2024-09-28 01:30:51.815298] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.080 [2024-09-28 01:30:51.815312] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
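For reference when reading the bdevperf result tables in this log: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size. A quick check against the two verify runs above (nothing here comes from the trace beyond the numbers themselves):

    awk 'BEGIN { printf "%.2f MiB/s\n", 3449.43 * 4096 / (1024 * 1024) }'   # 13.47, the 10 s run
    awk 'BEGIN { printf "%.2f MiB/s\n", 3222.16 * 4096 / (1024 * 1024) }'   # 12.59, the 1 s run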
00:17:56.080 [2024-09-28 01:30:51.815354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.080 [2024-09-28 01:30:51.969697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.650 [2024-09-28 01:30:52.466751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.650 malloc0 00:17:56.650 [2024-09-28 01:30:52.528093] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.650 [2024-09-28 01:30:52.528407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=75491 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 75491 /var/tmp/bdevperf.sock 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75491 ']' 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.650 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.914 [2024-09-28 01:30:52.646043] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
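Every initiator in this section follows the same two-stage pattern: bdevperf is started idle with -z and an RPC socket (-r), the NVMe bdev is attached over that socket, and only then does bdevperf.py perform_tests start the workload. A rough consolidation of the commands used above (backgrounding with & is only for illustration; the test scripts instead use waitforlisten to wait for the socket to appear):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 &
    # ...attach the TLS-protected controller over /var/tmp/bdevperf.sock as shown earlier...
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests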
00:17:56.915 [2024-09-28 01:30:52.646452] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75491 ] 00:17:56.915 [2024-09-28 01:30:52.809169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.194 [2024-09-28 01:30:53.020846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.465 [2024-09-28 01:30:53.187766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.033 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.033 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:58.033 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BKxIJCZRj7 00:17:58.033 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:58.292 [2024-09-28 01:30:54.187426] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.551 nvme0n1 00:17:58.551 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.551 Running I/O for 1 seconds... 00:17:59.929 3160.00 IOPS, 12.34 MiB/s 00:17:59.929 Latency(us) 00:17:59.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.929 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:59.929 Verification LBA range: start 0x0 length 0x2000 00:17:59.929 nvme0n1 : 1.03 3182.65 12.43 0.00 0.00 39585.12 10604.92 27048.49 00:17:59.929 =================================================================================================================== 00:17:59.929 Total : 3182.65 12.43 0.00 0.00 39585.12 10604.92 27048.49 00:17:59.929 { 00:17:59.929 "results": [ 00:17:59.929 { 00:17:59.929 "job": "nvme0n1", 00:17:59.929 "core_mask": "0x2", 00:17:59.929 "workload": "verify", 00:17:59.929 "status": "finished", 00:17:59.929 "verify_range": { 00:17:59.929 "start": 0, 00:17:59.929 "length": 8192 00:17:59.929 }, 00:17:59.929 "queue_depth": 128, 00:17:59.929 "io_size": 4096, 00:17:59.929 "runtime": 1.033415, 00:17:59.929 "iops": 3182.651693656469, 00:17:59.929 "mibps": 12.432233178345582, 00:17:59.929 "io_failed": 0, 00:17:59.929 "io_timeout": 0, 00:17:59.929 "avg_latency_us": 39585.11888111888, 00:17:59.929 "min_latency_us": 10604.916363636363, 00:17:59.929 "max_latency_us": 27048.494545454545 00:17:59.929 } 00:17:59.929 ], 00:17:59.929 "core_count": 1 00:17:59.929 } 00:17:59.929 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:59.929 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.929 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.930 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.930 01:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:59.930 "subsystems": [ 00:17:59.930 { 00:17:59.930 "subsystem": "keyring", 00:17:59.930 "config": [ 00:17:59.930 { 00:17:59.930 "method": "keyring_file_add_key", 00:17:59.930 "params": { 00:17:59.930 "name": "key0", 00:17:59.930 "path": "/tmp/tmp.BKxIJCZRj7" 00:17:59.930 } 00:17:59.930 } 00:17:59.930 ] 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "subsystem": "iobuf", 00:17:59.930 "config": [ 00:17:59.930 { 00:17:59.930 "method": "iobuf_set_options", 00:17:59.930 "params": { 00:17:59.930 "small_pool_count": 8192, 00:17:59.930 "large_pool_count": 1024, 00:17:59.930 "small_bufsize": 8192, 00:17:59.930 "large_bufsize": 135168 00:17:59.930 } 00:17:59.930 } 00:17:59.930 ] 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "subsystem": "sock", 00:17:59.930 "config": [ 00:17:59.930 { 00:17:59.930 "method": "sock_set_default_impl", 00:17:59.930 "params": { 00:17:59.930 "impl_name": "uring" 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "sock_impl_set_options", 00:17:59.930 "params": { 00:17:59.930 "impl_name": "ssl", 00:17:59.930 "recv_buf_size": 4096, 00:17:59.930 "send_buf_size": 4096, 00:17:59.930 "enable_recv_pipe": true, 00:17:59.930 "enable_quickack": false, 00:17:59.930 "enable_placement_id": 0, 00:17:59.930 "enable_zerocopy_send_server": true, 00:17:59.930 "enable_zerocopy_send_client": false, 00:17:59.930 "zerocopy_threshold": 0, 00:17:59.930 "tls_version": 0, 00:17:59.930 "enable_ktls": false 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "sock_impl_set_options", 00:17:59.930 "params": { 00:17:59.930 "impl_name": "posix", 00:17:59.930 "recv_buf_size": 2097152, 00:17:59.930 "send_buf_size": 2097152, 00:17:59.930 "enable_recv_pipe": true, 00:17:59.930 "enable_quickack": false, 00:17:59.930 "enable_placement_id": 0, 00:17:59.930 "enable_zerocopy_send_server": true, 00:17:59.930 "enable_zerocopy_send_client": false, 00:17:59.930 "zerocopy_threshold": 0, 00:17:59.930 "tls_version": 0, 00:17:59.930 "enable_ktls": false 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "sock_impl_set_options", 00:17:59.930 "params": { 00:17:59.930 "impl_name": "uring", 00:17:59.930 "recv_buf_size": 2097152, 00:17:59.930 "send_buf_size": 2097152, 00:17:59.930 "enable_recv_pipe": true, 00:17:59.930 "enable_quickack": false, 00:17:59.930 "enable_placement_id": 0, 00:17:59.930 "enable_zerocopy_send_server": false, 00:17:59.930 "enable_zerocopy_send_client": false, 00:17:59.930 "zerocopy_threshold": 0, 00:17:59.930 "tls_version": 0, 00:17:59.930 "enable_ktls": false 00:17:59.930 } 00:17:59.930 } 00:17:59.930 ] 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "subsystem": "vmd", 00:17:59.930 "config": [] 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "subsystem": "accel", 00:17:59.930 "config": [ 00:17:59.930 { 00:17:59.930 "method": "accel_set_options", 00:17:59.930 "params": { 00:17:59.930 "small_cache_size": 128, 00:17:59.930 "large_cache_size": 16, 00:17:59.930 "task_count": 2048, 00:17:59.930 "sequence_count": 2048, 00:17:59.930 "buf_count": 2048 00:17:59.930 } 00:17:59.930 } 00:17:59.930 ] 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "subsystem": "bdev", 00:17:59.930 "config": [ 00:17:59.930 { 00:17:59.930 "method": "bdev_set_options", 00:17:59.930 "params": { 00:17:59.930 "bdev_io_pool_size": 65535, 00:17:59.930 "bdev_io_cache_size": 256, 00:17:59.930 "bdev_auto_examine": true, 00:17:59.930 "iobuf_small_cache_size": 128, 00:17:59.930 "iobuf_large_cache_size": 16 00:17:59.930 } 
00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "bdev_raid_set_options", 00:17:59.930 "params": { 00:17:59.930 "process_window_size_kb": 1024, 00:17:59.930 "process_max_bandwidth_mb_sec": 0 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "bdev_iscsi_set_options", 00:17:59.930 "params": { 00:17:59.930 "timeout_sec": 30 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "bdev_nvme_set_options", 00:17:59.930 "params": { 00:17:59.930 "action_on_timeout": "none", 00:17:59.930 "timeout_us": 0, 00:17:59.930 "timeout_admin_us": 0, 00:17:59.930 "keep_alive_timeout_ms": 10000, 00:17:59.930 "arbitration_burst": 0, 00:17:59.930 "low_priority_weight": 0, 00:17:59.930 "medium_priority_weight": 0, 00:17:59.930 "high_priority_weight": 0, 00:17:59.930 "nvme_adminq_poll_period_us": 10000, 00:17:59.930 "nvme_ioq_poll_period_us": 0, 00:17:59.930 "io_queue_requests": 0, 00:17:59.930 "delay_cmd_submit": true, 00:17:59.930 "transport_retry_count": 4, 00:17:59.930 "bdev_retry_count": 3, 00:17:59.930 "transport_ack_timeout": 0, 00:17:59.930 "ctrlr_loss_timeout_sec": 0, 00:17:59.930 "reconnect_delay_sec": 0, 00:17:59.930 "fast_io_fail_timeout_sec": 0, 00:17:59.930 "disable_auto_failback": false, 00:17:59.930 "generate_uuids": false, 00:17:59.930 "transport_tos": 0, 00:17:59.930 "nvme_error_stat": false, 00:17:59.930 "rdma_srq_size": 0, 00:17:59.930 "io_path_stat": false, 00:17:59.930 "allow_accel_sequence": false, 00:17:59.930 "rdma_max_cq_size": 0, 00:17:59.930 "rdma_cm_event_timeout_ms": 0, 00:17:59.930 "dhchap_digests": [ 00:17:59.930 "sha256", 00:17:59.930 "sha384", 00:17:59.930 "sha512" 00:17:59.930 ], 00:17:59.930 "dhchap_dhgroups": [ 00:17:59.930 "null", 00:17:59.930 "ffdhe2048", 00:17:59.930 "ffdhe3072", 00:17:59.930 "ffdhe4096", 00:17:59.930 "ffdhe6144", 00:17:59.930 "ffdhe8192" 00:17:59.930 ] 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "bdev_nvme_set_hotplug", 00:17:59.930 "params": { 00:17:59.930 "period_us": 100000, 00:17:59.930 "enable": false 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "bdev_malloc_create", 00:17:59.930 "params": { 00:17:59.930 "name": "malloc0", 00:17:59.930 "num_blocks": 8192, 00:17:59.930 "block_size": 4096, 00:17:59.930 "physical_block_size": 4096, 00:17:59.930 "uuid": "adec9f8d-c6a1-482f-83ce-ddddfe800b8d", 00:17:59.930 "optimal_io_boundary": 0, 00:17:59.930 "md_size": 0, 00:17:59.930 "dif_type": 0, 00:17:59.930 "dif_is_head_of_md": false, 00:17:59.930 "dif_pi_format": 0 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "bdev_wait_for_examine" 00:17:59.930 } 00:17:59.930 ] 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "subsystem": "nbd", 00:17:59.930 "config": [] 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "subsystem": "scheduler", 00:17:59.930 "config": [ 00:17:59.930 { 00:17:59.930 "method": "framework_set_scheduler", 00:17:59.930 "params": { 00:17:59.930 "name": "static" 00:17:59.930 } 00:17:59.930 } 00:17:59.930 ] 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "subsystem": "nvmf", 00:17:59.930 "config": [ 00:17:59.930 { 00:17:59.930 "method": "nvmf_set_config", 00:17:59.930 "params": { 00:17:59.930 "discovery_filter": "match_any", 00:17:59.930 "admin_cmd_passthru": { 00:17:59.930 "identify_ctrlr": false 00:17:59.930 }, 00:17:59.930 "dhchap_digests": [ 00:17:59.930 "sha256", 00:17:59.930 "sha384", 00:17:59.930 "sha512" 00:17:59.930 ], 00:17:59.930 "dhchap_dhgroups": [ 00:17:59.930 "null", 00:17:59.930 "ffdhe2048", 00:17:59.930 "ffdhe3072", 00:17:59.930 "ffdhe4096", 
00:17:59.930 "ffdhe6144", 00:17:59.930 "ffdhe8192" 00:17:59.930 ] 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "nvmf_set_max_subsystems", 00:17:59.930 "params": { 00:17:59.930 "max_subsystems": 1024 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "nvmf_set_crdt", 00:17:59.930 "params": { 00:17:59.930 "crdt1": 0, 00:17:59.930 "crdt2": 0, 00:17:59.930 "crdt3": 0 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "nvmf_create_transport", 00:17:59.930 "params": { 00:17:59.930 "trtype": "TCP", 00:17:59.930 "max_queue_depth": 128, 00:17:59.930 "max_io_qpairs_per_ctrlr": 127, 00:17:59.930 "in_capsule_data_size": 4096, 00:17:59.930 "max_io_size": 131072, 00:17:59.930 "io_unit_size": 131072, 00:17:59.930 "max_aq_depth": 128, 00:17:59.930 "num_shared_buffers": 511, 00:17:59.930 "buf_cache_size": 4294967295, 00:17:59.930 "dif_insert_or_strip": false, 00:17:59.930 "zcopy": false, 00:17:59.930 "c2h_success": false, 00:17:59.930 "sock_priority": 0, 00:17:59.930 "abort_timeout_sec": 1, 00:17:59.930 "ack_timeout": 0, 00:17:59.930 "data_wr_pool_size": 0 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "nvmf_create_subsystem", 00:17:59.930 "params": { 00:17:59.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.930 "allow_any_host": false, 00:17:59.930 "serial_number": "00000000000000000000", 00:17:59.930 "model_number": "SPDK bdev Controller", 00:17:59.930 "max_namespaces": 32, 00:17:59.930 "min_cntlid": 1, 00:17:59.930 "max_cntlid": 65519, 00:17:59.930 "ana_reporting": false 00:17:59.930 } 00:17:59.930 }, 00:17:59.930 { 00:17:59.930 "method": "nvmf_subsystem_add_host", 00:17:59.930 "params": { 00:17:59.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.931 "host": "nqn.2016-06.io.spdk:host1", 00:17:59.931 "psk": "key0" 00:17:59.931 } 00:17:59.931 }, 00:17:59.931 { 00:17:59.931 "method": "nvmf_subsystem_add_ns", 00:17:59.931 "params": { 00:17:59.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.931 "namespace": { 00:17:59.931 "nsid": 1, 00:17:59.931 "bdev_name": "malloc0", 00:17:59.931 "nguid": "ADEC9F8DC6A1482F83CEDDDDFE800B8D", 00:17:59.931 "uuid": "adec9f8d-c6a1-482f-83ce-ddddfe800b8d", 00:17:59.931 "no_auto_visible": false 00:17:59.931 } 00:17:59.931 } 00:17:59.931 }, 00:17:59.931 { 00:17:59.931 "method": "nvmf_subsystem_add_listener", 00:17:59.931 "params": { 00:17:59.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.931 "listen_address": { 00:17:59.931 "trtype": "TCP", 00:17:59.931 "adrfam": "IPv4", 00:17:59.931 "traddr": "10.0.0.3", 00:17:59.931 "trsvcid": "4420" 00:17:59.931 }, 00:17:59.931 "secure_channel": false, 00:17:59.931 "sock_impl": "ssl" 00:17:59.931 } 00:17:59.931 } 00:17:59.931 ] 00:17:59.931 } 00:17:59.931 ] 00:17:59.931 }' 00:17:59.931 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:00.190 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:00.190 "subsystems": [ 00:18:00.190 { 00:18:00.190 "subsystem": "keyring", 00:18:00.190 "config": [ 00:18:00.190 { 00:18:00.190 "method": "keyring_file_add_key", 00:18:00.190 "params": { 00:18:00.190 "name": "key0", 00:18:00.190 "path": "/tmp/tmp.BKxIJCZRj7" 00:18:00.190 } 00:18:00.190 } 00:18:00.190 ] 00:18:00.190 }, 00:18:00.190 { 00:18:00.190 "subsystem": "iobuf", 00:18:00.190 "config": [ 00:18:00.190 { 00:18:00.190 "method": "iobuf_set_options", 00:18:00.190 "params": { 00:18:00.190 "small_pool_count": 8192, 00:18:00.190 
"large_pool_count": 1024, 00:18:00.190 "small_bufsize": 8192, 00:18:00.190 "large_bufsize": 135168 00:18:00.190 } 00:18:00.190 } 00:18:00.190 ] 00:18:00.190 }, 00:18:00.190 { 00:18:00.190 "subsystem": "sock", 00:18:00.190 "config": [ 00:18:00.190 { 00:18:00.190 "method": "sock_set_default_impl", 00:18:00.190 "params": { 00:18:00.190 "impl_name": "uring" 00:18:00.190 } 00:18:00.190 }, 00:18:00.190 { 00:18:00.190 "method": "sock_impl_set_options", 00:18:00.190 "params": { 00:18:00.190 "impl_name": "ssl", 00:18:00.190 "recv_buf_size": 4096, 00:18:00.190 "send_buf_size": 4096, 00:18:00.190 "enable_recv_pipe": true, 00:18:00.191 "enable_quickack": false, 00:18:00.191 "enable_placement_id": 0, 00:18:00.191 "enable_zerocopy_send_server": true, 00:18:00.191 "enable_zerocopy_send_client": false, 00:18:00.191 "zerocopy_threshold": 0, 00:18:00.191 "tls_version": 0, 00:18:00.191 "enable_ktls": false 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "sock_impl_set_options", 00:18:00.191 "params": { 00:18:00.191 "impl_name": "posix", 00:18:00.191 "recv_buf_size": 2097152, 00:18:00.191 "send_buf_size": 2097152, 00:18:00.191 "enable_recv_pipe": true, 00:18:00.191 "enable_quickack": false, 00:18:00.191 "enable_placement_id": 0, 00:18:00.191 "enable_zerocopy_send_server": true, 00:18:00.191 "enable_zerocopy_send_client": false, 00:18:00.191 "zerocopy_threshold": 0, 00:18:00.191 "tls_version": 0, 00:18:00.191 "enable_ktls": false 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "sock_impl_set_options", 00:18:00.191 "params": { 00:18:00.191 "impl_name": "uring", 00:18:00.191 "recv_buf_size": 2097152, 00:18:00.191 "send_buf_size": 2097152, 00:18:00.191 "enable_recv_pipe": true, 00:18:00.191 "enable_quickack": false, 00:18:00.191 "enable_placement_id": 0, 00:18:00.191 "enable_zerocopy_send_server": false, 00:18:00.191 "enable_zerocopy_send_client": false, 00:18:00.191 "zerocopy_threshold": 0, 00:18:00.191 "tls_version": 0, 00:18:00.191 "enable_ktls": false 00:18:00.191 } 00:18:00.191 } 00:18:00.191 ] 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "subsystem": "vmd", 00:18:00.191 "config": [] 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "subsystem": "accel", 00:18:00.191 "config": [ 00:18:00.191 { 00:18:00.191 "method": "accel_set_options", 00:18:00.191 "params": { 00:18:00.191 "small_cache_size": 128, 00:18:00.191 "large_cache_size": 16, 00:18:00.191 "task_count": 2048, 00:18:00.191 "sequence_count": 2048, 00:18:00.191 "buf_count": 2048 00:18:00.191 } 00:18:00.191 } 00:18:00.191 ] 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "subsystem": "bdev", 00:18:00.191 "config": [ 00:18:00.191 { 00:18:00.191 "method": "bdev_set_options", 00:18:00.191 "params": { 00:18:00.191 "bdev_io_pool_size": 65535, 00:18:00.191 "bdev_io_cache_size": 256, 00:18:00.191 "bdev_auto_examine": true, 00:18:00.191 "iobuf_small_cache_size": 128, 00:18:00.191 "iobuf_large_cache_size": 16 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "bdev_raid_set_options", 00:18:00.191 "params": { 00:18:00.191 "process_window_size_kb": 1024, 00:18:00.191 "process_max_bandwidth_mb_sec": 0 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "bdev_iscsi_set_options", 00:18:00.191 "params": { 00:18:00.191 "timeout_sec": 30 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "bdev_nvme_set_options", 00:18:00.191 "params": { 00:18:00.191 "action_on_timeout": "none", 00:18:00.191 "timeout_us": 0, 00:18:00.191 "timeout_admin_us": 0, 00:18:00.191 "keep_alive_timeout_ms": 10000, 
00:18:00.191 "arbitration_burst": 0, 00:18:00.191 "low_priority_weight": 0, 00:18:00.191 "medium_priority_weight": 0, 00:18:00.191 "high_priority_weight": 0, 00:18:00.191 "nvme_adminq_poll_period_us": 10000, 00:18:00.191 "nvme_ioq_poll_period_us": 0, 00:18:00.191 "io_queue_requests": 512, 00:18:00.191 "delay_cmd_submit": true, 00:18:00.191 "transport_retry_count": 4, 00:18:00.191 "bdev_retry_count": 3, 00:18:00.191 "transport_ack_timeout": 0, 00:18:00.191 "ctrlr_loss_timeout_sec": 0, 00:18:00.191 "reconnect_delay_sec": 0, 00:18:00.191 "fast_io_fail_timeout_sec": 0, 00:18:00.191 "disable_auto_failback": false, 00:18:00.191 "generate_uuids": false, 00:18:00.191 "transport_tos": 0, 00:18:00.191 "nvme_error_stat": false, 00:18:00.191 "rdma_srq_size": 0, 00:18:00.191 "io_path_stat": false, 00:18:00.191 "allow_accel_sequence": false, 00:18:00.191 "rdma_max_cq_size": 0, 00:18:00.191 "rdma_cm_event_timeout_ms": 0, 00:18:00.191 "dhchap_digests": [ 00:18:00.191 "sha256", 00:18:00.191 "sha384", 00:18:00.191 "sha512" 00:18:00.191 ], 00:18:00.191 "dhchap_dhgroups": [ 00:18:00.191 "null", 00:18:00.191 "ffdhe2048", 00:18:00.191 "ffdhe3072", 00:18:00.191 "ffdhe4096", 00:18:00.191 "ffdhe6144", 00:18:00.191 "ffdhe8192" 00:18:00.191 ] 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "bdev_nvme_attach_controller", 00:18:00.191 "params": { 00:18:00.191 "name": "nvme0", 00:18:00.191 "trtype": "TCP", 00:18:00.191 "adrfam": "IPv4", 00:18:00.191 "traddr": "10.0.0.3", 00:18:00.191 "trsvcid": "4420", 00:18:00.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.191 "prchk_reftag": false, 00:18:00.191 "prchk_guard": false, 00:18:00.191 "ctrlr_loss_timeout_sec": 0, 00:18:00.191 "reconnect_delay_sec": 0, 00:18:00.191 "fast_io_fail_timeout_sec": 0, 00:18:00.191 "psk": "key0", 00:18:00.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.191 "hdgst": false, 00:18:00.191 "ddgst": false 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "bdev_nvme_set_hotplug", 00:18:00.191 "params": { 00:18:00.191 "period_us": 100000, 00:18:00.191 "enable": false 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "bdev_enable_histogram", 00:18:00.191 "params": { 00:18:00.191 "name": "nvme0n1", 00:18:00.191 "enable": true 00:18:00.191 } 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "method": "bdev_wait_for_examine" 00:18:00.191 } 00:18:00.191 ] 00:18:00.191 }, 00:18:00.191 { 00:18:00.191 "subsystem": "nbd", 00:18:00.191 "config": [] 00:18:00.191 } 00:18:00.191 ] 00:18:00.191 }' 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 75491 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75491 ']' 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75491 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75491 00:18:00.191 killing process with pid 75491 00:18:00.191 Received shutdown signal, test time was about 1.000000 seconds 00:18:00.191 00:18:00.191 Latency(us) 00:18:00.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.191 
=================================================================================================================== 00:18:00.191 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75491' 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75491 00:18:00.191 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75491 00:18:01.129 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 75459 00:18:01.129 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75459 ']' 00:18:01.129 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75459 00:18:01.129 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:01.129 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.129 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75459 00:18:01.129 killing process with pid 75459 00:18:01.129 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:01.129 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:01.129 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75459' 00:18:01.129 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75459 00:18:01.129 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75459 00:18:02.505 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:02.505 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:02.505 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.505 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:02.505 "subsystems": [ 00:18:02.505 { 00:18:02.505 "subsystem": "keyring", 00:18:02.505 "config": [ 00:18:02.505 { 00:18:02.505 "method": "keyring_file_add_key", 00:18:02.505 "params": { 00:18:02.505 "name": "key0", 00:18:02.505 "path": "/tmp/tmp.BKxIJCZRj7" 00:18:02.505 } 00:18:02.505 } 00:18:02.505 ] 00:18:02.505 }, 00:18:02.505 { 00:18:02.505 "subsystem": "iobuf", 00:18:02.505 "config": [ 00:18:02.505 { 00:18:02.505 "method": "iobuf_set_options", 00:18:02.505 "params": { 00:18:02.505 "small_pool_count": 8192, 00:18:02.505 "large_pool_count": 1024, 00:18:02.505 "small_bufsize": 8192, 00:18:02.505 "large_bufsize": 135168 00:18:02.505 } 00:18:02.505 } 00:18:02.505 ] 00:18:02.505 }, 00:18:02.505 { 00:18:02.505 "subsystem": "sock", 00:18:02.505 "config": [ 00:18:02.505 { 00:18:02.505 "method": "sock_set_default_impl", 00:18:02.505 "params": { 00:18:02.505 "impl_name": "uring" 00:18:02.505 } 00:18:02.505 }, 00:18:02.506 { 00:18:02.506 "method": "sock_impl_set_options", 00:18:02.506 "params": { 00:18:02.506 "impl_name": "ssl", 00:18:02.506 
"recv_buf_size": 4096, 00:18:02.506 "send_buf_size": 4096, 00:18:02.506 "enable_recv_pipe": true, 00:18:02.506 "enable_quickack": false, 00:18:02.506 "enable_placement_id": 0, 00:18:02.506 "enable_zerocopy_send_server": true, 00:18:02.506 "enable_zerocopy_send_client": false, 00:18:02.506 "zerocopy_threshold": 0, 00:18:02.506 "tls_version": 0, 00:18:02.506 "enable_ktls": false 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "sock_impl_set_options", 00:18:02.506 "params": { 00:18:02.506 "impl_name": "posix", 00:18:02.506 "recv_buf_size": 2097152, 00:18:02.506 "send_buf_size": 2097152, 00:18:02.506 "enable_recv_pipe": true, 00:18:02.506 "enable_quickack": false, 00:18:02.506 "enable_placement_id": 0, 00:18:02.506 "enable_zerocopy_send_server": true, 00:18:02.506 "enable_zerocopy_send_client": false, 00:18:02.506 "zerocopy_threshold": 0, 00:18:02.506 "tls_version": 0, 00:18:02.506 "enable_ktls": false 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "sock_impl_set_options", 00:18:02.506 "params": { 00:18:02.506 "impl_name": "uring", 00:18:02.506 "recv_buf_size": 2097152, 00:18:02.506 "send_buf_size": 2097152, 00:18:02.506 "enable_recv_pipe": true, 00:18:02.506 "enable_quickack": false, 00:18:02.506 "enable_placement_id": 0, 00:18:02.506 "enable_zerocopy_send_server": false, 00:18:02.506 "enable_zerocopy_send_client": false, 00:18:02.506 "zerocopy_threshold": 0, 00:18:02.506 "tls_version": 0, 00:18:02.506 "enable_ktls": false 00:18:02.506 } 00:18:02.506 } 00:18:02.506 ] 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "subsystem": "vmd", 00:18:02.506 "config": [] 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "subsystem": "accel", 00:18:02.506 "config": [ 00:18:02.506 { 00:18:02.506 "method": "accel_set_options", 00:18:02.506 "params": { 00:18:02.506 "small_cache_size": 128, 00:18:02.506 "large_cache_size": 16, 00:18:02.506 "task_count": 2048, 00:18:02.506 "sequence_count": 2048, 00:18:02.506 "buf_count": 2048 00:18:02.506 } 00:18:02.506 } 00:18:02.506 ] 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "subsystem": "bdev", 00:18:02.506 "config": [ 00:18:02.506 { 00:18:02.506 "method": "bdev_set_options", 00:18:02.506 "params": { 00:18:02.506 "bdev_io_pool_size": 65535, 00:18:02.506 "bdev_io_cache_size": 256, 00:18:02.506 "bdev_auto_examine": true, 00:18:02.506 "iobuf_small_cache_size": 128, 00:18:02.506 "iobuf_large_cache_size": 16 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "bdev_raid_set_options", 00:18:02.506 "params": { 00:18:02.506 "process_window_size_kb": 1024, 00:18:02.506 "process_max_bandwidth_mb_sec": 0 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "bdev_iscsi_set_options", 00:18:02.506 "params": { 00:18:02.506 "timeout_sec": 30 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "bdev_nvme_set_options", 00:18:02.506 "params": { 00:18:02.506 "action_on_timeout": "none", 00:18:02.506 "timeout_us": 0, 00:18:02.506 "timeout_admin_us": 0, 00:18:02.506 "keep_alive_timeout_ms": 10000, 00:18:02.506 "arbitration_burst": 0, 00:18:02.506 "low_priority_weight": 0, 00:18:02.506 "medium_priority_weight": 0, 00:18:02.506 "high_priority_weight": 0, 00:18:02.506 "nvme_adminq_poll_period_us": 10000, 00:18:02.506 "nvme_ioq_poll_period_us": 0, 00:18:02.506 "io_queue_requests": 0, 00:18:02.506 "delay_cmd_submit": true, 00:18:02.506 "transport_retry_count": 4, 00:18:02.506 "bdev_retry_count": 3, 00:18:02.506 "transport_ack_timeout": 0, 00:18:02.506 "ctrlr_loss_timeout_sec": 0, 00:18:02.506 "reconnect_delay_sec": 
0, 00:18:02.506 "fast_io_fail_timeout_sec": 0, 00:18:02.506 "disable_auto_failback": false, 00:18:02.506 "generate_uuids": false, 00:18:02.506 "transport_tos": 0, 00:18:02.506 "nvme_error_stat": false, 00:18:02.506 "rdma_srq_size": 0, 00:18:02.506 "io_path_stat": false, 00:18:02.506 "allow_accel_sequence": false, 00:18:02.506 "rdma_max_cq_size": 0, 00:18:02.506 "rdma_cm_event_timeout_ms": 0, 00:18:02.506 "dhchap_digests": [ 00:18:02.506 "sha256", 00:18:02.506 "sha384", 00:18:02.506 "sha512" 00:18:02.506 ], 00:18:02.506 "dhchap_dhgroups": [ 00:18:02.506 "null", 00:18:02.506 "ffdhe2048", 00:18:02.506 "ffdhe3072", 00:18:02.506 "ffdhe4096", 00:18:02.506 "ffdhe6144", 00:18:02.506 "ffdhe8192" 00:18:02.506 ] 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "bdev_nvme_set_hotplug", 00:18:02.506 "params": { 00:18:02.506 "period_us": 100000, 00:18:02.506 "enable": false 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "bdev_malloc_create", 00:18:02.506 "params": { 00:18:02.506 "name": "malloc0", 00:18:02.506 "num_blocks": 8192, 00:18:02.506 "block_size": 4096, 00:18:02.506 "physical_block_size": 4096, 00:18:02.506 "uuid": "adec9f8d-c6a1-482f-83ce-ddddfe800b8d", 00:18:02.506 "optimal_io_boundary": 0, 00:18:02.506 "md_size": 0, 00:18:02.506 "dif_type": 0, 00:18:02.506 "dif_is_head_of_md": false, 00:18:02.506 "dif_pi_format": 0 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "bdev_wait_for_examine" 00:18:02.506 } 00:18:02.506 ] 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "subsystem": "nbd", 00:18:02.506 "config": [] 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "subsystem": "scheduler", 00:18:02.506 "config": [ 00:18:02.506 { 00:18:02.506 "method": "framework_set_scheduler", 00:18:02.506 "params": { 00:18:02.506 "name": "static" 00:18:02.506 } 00:18:02.506 } 00:18:02.506 ] 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "subsystem": "nvmf", 00:18:02.506 "config": [ 00:18:02.506 { 00:18:02.506 "method": "nvmf_set_config", 00:18:02.506 "params": { 00:18:02.506 "discovery_filter": "match_any", 00:18:02.506 "admin_cmd_passthru": { 00:18:02.506 "identify_ctrlr": false 00:18:02.506 }, 00:18:02.506 "dhchap_digests": [ 00:18:02.506 "sha256", 00:18:02.506 "sha384", 00:18:02.506 "sha512" 00:18:02.506 ], 00:18:02.506 "dhchap_dhgroups": [ 00:18:02.506 "null", 00:18:02.506 "ffdhe2048", 00:18:02.506 "ffdhe3072", 00:18:02.506 "ffdhe4096", 00:18:02.506 "ffdhe6144", 00:18:02.506 "ffdhe8192" 00:18:02.506 ] 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "nvmf_set_max_subsystems", 00:18:02.506 "params": { 00:18:02.506 "max_subsystems": 1024 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "nvmf_set_crdt", 00:18:02.506 "params": { 00:18:02.506 "crdt1": 0, 00:18:02.506 "crdt2": 0, 00:18:02.506 "crdt3": 0 00:18:02.506 } 00:18:02.506 }, 00:18:02.506 { 00:18:02.506 "method": "nvmf_create_transport", 00:18:02.506 "params": { 00:18:02.506 "trtype": "TCP", 00:18:02.506 "max_queue_depth": 128, 00:18:02.507 "max_io_qpairs_per_ctrlr": 127, 00:18:02.507 "in_capsule_data_size": 4096, 00:18:02.507 "max_io_size": 131072, 00:18:02.507 "io_unit_size": 131072, 00:18:02.507 "max_aq_depth": 128, 00:18:02.507 "num_shared_buffers": 511, 00:18:02.507 "buf_cache_size": 4294967295, 00:18:02.507 "dif_insert_or_strip": false, 00:18:02.507 "zcopy": false, 00:18:02.507 "c2h_success": false, 00:18:02.507 "sock_priority": 0, 00:18:02.507 "abort_timeout_sec": 1, 00:18:02.507 "ack_timeout": 0, 00:18:02.507 "data_wr_pool_size": 0 00:18:02.507 } 00:18:02.507 }, 
00:18:02.507 { 00:18:02.507 "method": "nvmf_create_subsystem", 00:18:02.507 "params": { 00:18:02.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.507 "allow_any_host": false, 00:18:02.507 "serial_number": "00000000000000000000", 00:18:02.507 "model_number": "SPDK bdev Controller", 00:18:02.507 "max_namespaces": 32, 00:18:02.507 "min_cntlid": 1, 00:18:02.507 "max_cntlid": 65519, 00:18:02.507 "ana_reporting": false 00:18:02.507 } 00:18:02.507 }, 00:18:02.507 { 00:18:02.507 "method": "nvmf_subsystem_add_host", 00:18:02.507 "params": { 00:18:02.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.507 "host": "nqn.2016-06.io.spdk:host1", 00:18:02.507 "psk": "key0" 00:18:02.507 } 00:18:02.507 }, 00:18:02.507 { 00:18:02.507 "method": "nvmf_subsystem_add_ns", 00:18:02.507 "params": { 00:18:02.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.507 "namespace": { 00:18:02.507 "nsid": 1, 00:18:02.507 "bdev_name": "malloc0", 00:18:02.507 "nguid": "ADEC9F8DC6A1482F83CEDDDDFE800B8D", 00:18:02.507 "uuid": "adec9f8d-c6a1-482f-83ce-ddddfe800b8d", 00:18:02.507 "no_auto_visible": false 00:18:02.507 } 00:18:02.507 } 00:18:02.507 }, 00:18:02.507 { 00:18:02.507 "method": "nvmf_subsystem_add_listener", 00:18:02.507 "params": { 00:18:02.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.507 "listen_address": { 00:18:02.507 "trtype": "TCP", 00:18:02.507 "adrfam": "IPv4", 00:18:02.507 "traddr": "10.0.0.3", 00:18:02.507 "trsvcid": "4420" 00:18:02.507 }, 00:18:02.507 "secure_channel": false, 00:18:02.507 "sock_impl": "ssl" 00:18:02.507 } 00:18:02.507 } 00:18:02.507 ] 00:18:02.507 } 00:18:02.507 ] 00:18:02.507 }' 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75565 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75565 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75565 ']' 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.507 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.507 [2024-09-28 01:30:58.202494] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
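A minimal sketch, not taken from the test scripts themselves, of how a target configuration like the JSON echoed above can be captured from and replayed against the RPC socket the target listens on (/var/tmp/spdk.sock); the rpc.py path and socket name appear in this log, while the output file name is hypothetical:

    # Dump the live target configuration (keyring, sock, bdev and nvmf subsystems) to JSON.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/tls_target_config.json
    # Feed the saved JSON back into a freshly started target waiting on the same socket.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock load_config < /tmp/tls_target_config.json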
00:18:02.507 [2024-09-28 01:30:58.203050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.507 [2024-09-28 01:30:58.377814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.766 [2024-09-28 01:30:58.538881] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.766 [2024-09-28 01:30:58.539268] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.766 [2024-09-28 01:30:58.539304] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.766 [2024-09-28 01:30:58.539323] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.766 [2024-09-28 01:30:58.539337] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.766 [2024-09-28 01:30:58.539517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.024 [2024-09-28 01:30:58.802618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.284 [2024-09-28 01:30:58.962376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.284 [2024-09-28 01:30:58.994347] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.284 [2024-09-28 01:30:58.994679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=75597 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 75597 /var/tmp/bdevperf.sock 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75597 ']' 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.284 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:03.284 "subsystems": [ 00:18:03.284 { 00:18:03.284 "subsystem": "keyring", 00:18:03.284 "config": [ 00:18:03.284 { 00:18:03.285 "method": "keyring_file_add_key", 00:18:03.285 "params": { 00:18:03.285 "name": "key0", 00:18:03.285 "path": "/tmp/tmp.BKxIJCZRj7" 00:18:03.285 } 
00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "iobuf", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "iobuf_set_options", 00:18:03.285 "params": { 00:18:03.285 "small_pool_count": 8192, 00:18:03.285 "large_pool_count": 1024, 00:18:03.285 "small_bufsize": 8192, 00:18:03.285 "large_bufsize": 135168 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "sock", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "sock_set_default_impl", 00:18:03.285 "params": { 00:18:03.285 "impl_name": "uring" 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "sock_impl_set_options", 00:18:03.285 "params": { 00:18:03.285 "impl_name": "ssl", 00:18:03.285 "recv_buf_size": 4096, 00:18:03.285 "send_buf_size": 4096, 00:18:03.285 "enable_recv_pipe": true, 00:18:03.285 "enable_quickack": false, 00:18:03.285 "enable_placement_id": 0, 00:18:03.285 "enable_zerocopy_send_server": true, 00:18:03.285 "enable_zerocopy_send_client": false, 00:18:03.285 "zerocopy_threshold": 0, 00:18:03.285 "tls_version": 0, 00:18:03.285 "enable_ktls": false 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "sock_impl_set_options", 00:18:03.285 "params": { 00:18:03.285 "impl_name": "posix", 00:18:03.285 "recv_buf_size": 2097152, 00:18:03.285 "send_buf_size": 2097152, 00:18:03.285 "enable_recv_pipe": true, 00:18:03.285 "enable_quickack": false, 00:18:03.285 "enable_placement_id": 0, 00:18:03.285 "enable_zerocopy_send_server": true, 00:18:03.285 "enable_zerocopy_send_client": false, 00:18:03.285 "zerocopy_threshold": 0, 00:18:03.285 "tls_version": 0, 00:18:03.285 "enable_ktls": false 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "sock_impl_set_options", 00:18:03.285 "params": { 00:18:03.285 "impl_name": "uring", 00:18:03.285 "recv_buf_size": 2097152, 00:18:03.285 "send_buf_size": 2097152, 00:18:03.285 "enable_recv_pipe": true, 00:18:03.285 "enable_quickack": false, 00:18:03.285 "enable_placement_id": 0, 00:18:03.285 "enable_zerocopy_send_server": false, 00:18:03.285 "enable_zerocopy_send_client": false, 00:18:03.285 "zerocopy_threshold": 0, 00:18:03.285 "tls_version": 0, 00:18:03.285 "enable_ktls": false 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "vmd", 00:18:03.285 "config": [] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "accel", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "accel_set_options", 00:18:03.285 "params": { 00:18:03.285 "small_cache_size": 128, 00:18:03.285 "large_cache_size": 16, 00:18:03.285 "task_count": 2048, 00:18:03.285 "sequence_count": 2048, 00:18:03.285 "buf_count": 2048 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "bdev", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "bdev_set_options", 00:18:03.285 "params": { 00:18:03.285 "bdev_io_pool_size": 65535, 00:18:03.285 "bdev_io_cache_size": 256, 00:18:03.285 "bdev_auto_examine": true, 00:18:03.285 "iobuf_small_cache_size": 128, 00:18:03.285 "iobuf_large_cache_size": 16 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_raid_set_options", 00:18:03.285 "params": { 00:18:03.285 "process_window_size_kb": 1024, 00:18:03.285 "process_max_bandwidth_mb_sec": 0 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_iscsi_set_options", 00:18:03.285 "params": { 00:18:03.285 "timeout_sec": 30 00:18:03.285 } 
00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_nvme_set_options", 00:18:03.285 "params": { 00:18:03.285 "action_on_timeout": "none", 00:18:03.285 "timeout_us": 0, 00:18:03.285 "timeout_admin_us": 0, 00:18:03.285 "keep_alive_timeout_ms": 10000, 00:18:03.285 "arbitration_burst": 0, 00:18:03.285 "low_priority_weight": 0, 00:18:03.285 "medium_priority_weight": 0, 00:18:03.285 "high_priority_weight": 0, 00:18:03.285 "nvme_adminq_poll_period_us": 10000, 00:18:03.285 "nvme_ioq_poll_period_us": 0, 00:18:03.285 "io_queue_requests": 512, 00:18:03.285 "delay_cmd_submit": true, 00:18:03.285 "transport_retry_count": 4, 00:18:03.285 "bdev_retry_count": 3, 00:18:03.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.285 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.285 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.285 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.285 "transport_ack_timeout": 0, 00:18:03.285 "ctrlr_loss_timeout_sec": 0, 00:18:03.285 "reconnect_delay_sec": 0, 00:18:03.285 "fast_io_fail_timeout_sec": 0, 00:18:03.285 "disable_auto_failback": false, 00:18:03.285 "generate_uuids": false, 00:18:03.285 "transport_tos": 0, 00:18:03.285 "nvme_error_stat": false, 00:18:03.285 "rdma_srq_size": 0, 00:18:03.285 "io_path_stat": false, 00:18:03.285 "allow_accel_sequence": false, 00:18:03.285 "rdma_max_cq_size": 0, 00:18:03.285 "rdma_cm_event_timeout_ms": 0, 00:18:03.285 "dhchap_digests": [ 00:18:03.285 "sha256", 00:18:03.285 "sha384", 00:18:03.285 "sha512" 00:18:03.285 ], 00:18:03.285 "dhchap_dhgroups": [ 00:18:03.285 "null", 00:18:03.285 "ffdhe2048", 00:18:03.285 "ffdhe3072", 00:18:03.285 "ffdhe4096", 00:18:03.285 "ffdhe6144", 00:18:03.285 "ffdhe8192" 00:18:03.285 ] 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_nvme_attach_controller", 00:18:03.285 "params": { 00:18:03.285 "name": "nvme0", 00:18:03.285 "trtype": "TCP", 00:18:03.285 "adrfam": "IPv4", 00:18:03.285 "traddr": "10.0.0.3", 00:18:03.285 "trsvcid": "4420", 00:18:03.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.285 "prchk_reftag": false, 00:18:03.285 "prchk_guard": false, 00:18:03.285 "ctrlr_loss_timeout_sec": 0, 00:18:03.285 "reconnect_delay_sec": 0, 00:18:03.285 "fast_io_fail_timeout_sec": 0, 00:18:03.285 "psk": "key0", 00:18:03.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.286 "hdgst": false, 00:18:03.286 "ddgst": false 00:18:03.286 } 00:18:03.286 }, 00:18:03.286 { 00:18:03.286 "method": "bdev_nvme_set_hotplug", 00:18:03.286 "params": { 00:18:03.286 "period_us": 100000, 00:18:03.286 "enable": false 00:18:03.286 } 00:18:03.286 }, 00:18:03.286 { 00:18:03.286 "method": "bdev_enable_histogram", 00:18:03.286 "params": { 00:18:03.286 "name": "nvme0n1", 00:18:03.286 "enable": true 00:18:03.286 } 00:18:03.286 }, 00:18:03.286 { 00:18:03.286 "method": "bdev_wait_for_examine" 00:18:03.286 } 00:18:03.286 ] 00:18:03.286 }, 00:18:03.286 { 00:18:03.286 "subsystem": "nbd", 00:18:03.286 "config": [] 00:18:03.286 } 00:18:03.286 ] 00:18:03.286 }' 00:18:03.544 [2024-09-28 01:30:59.226521] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
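A rough sketch of the three-step flow the harness uses to drive bdevperf, assembled from commands that appear verbatim in this log rather than copied from target/tls.sh:

    # 1. Start bdevperf idle (-z) on its own RPC socket, reading the JSON config above from fd 63.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 &
    # 2. Confirm the TLS-attached controller (nvme0, psk key0) actually came up.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    # 3. Kick off the timed verify workload against nvme0n1.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests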
00:18:03.544 [2024-09-28 01:30:59.226919] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75597 ] 00:18:03.544 [2024-09-28 01:30:59.401761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.802 [2024-09-28 01:30:59.626581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.061 [2024-09-28 01:30:59.863366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:04.061 [2024-09-28 01:30:59.962819] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.321 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.321 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:04.321 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:04.321 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:04.580 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.580 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:04.839 Running I/O for 1 seconds... 00:18:05.774 3139.00 IOPS, 12.26 MiB/s 00:18:05.774 Latency(us) 00:18:05.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.774 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:05.774 Verification LBA range: start 0x0 length 0x2000 00:18:05.774 nvme0n1 : 1.02 3198.46 12.49 0.00 0.00 39533.44 7983.48 39798.23 00:18:05.774 =================================================================================================================== 00:18:05.774 Total : 3198.46 12.49 0.00 0.00 39533.44 7983.48 39798.23 00:18:05.774 { 00:18:05.774 "results": [ 00:18:05.774 { 00:18:05.774 "job": "nvme0n1", 00:18:05.774 "core_mask": "0x2", 00:18:05.774 "workload": "verify", 00:18:05.774 "status": "finished", 00:18:05.774 "verify_range": { 00:18:05.774 "start": 0, 00:18:05.774 "length": 8192 00:18:05.774 }, 00:18:05.774 "queue_depth": 128, 00:18:05.774 "io_size": 4096, 00:18:05.774 "runtime": 1.021429, 00:18:05.774 "iops": 3198.460196450267, 00:18:05.774 "mibps": 12.493985142383856, 00:18:05.774 "io_failed": 0, 00:18:05.774 "io_timeout": 0, 00:18:05.774 "avg_latency_us": 39533.44265353257, 00:18:05.774 "min_latency_us": 7983.476363636363, 00:18:05.774 "max_latency_us": 39798.225454545456 00:18:05.774 } 00:18:05.774 ], 00:18:05.774 "core_count": 1 00:18:05.774 } 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:05.774 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:05.775 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:05.775 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:05.775 nvmf_trace.0 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 75597 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75597 ']' 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75597 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75597 00:18:06.033 killing process with pid 75597 00:18:06.033 Received shutdown signal, test time was about 1.000000 seconds 00:18:06.033 00:18:06.033 Latency(us) 00:18:06.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.033 =================================================================================================================== 00:18:06.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75597' 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75597 00:18:06.033 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75597 00:18:06.969 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:06.969 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:06.969 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:06.969 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.969 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:06.969 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.969 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.969 rmmod nvme_tcp 00:18:06.969 rmmod nvme_fabrics 00:18:06.970 rmmod nvme_keyring 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 
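A condensed reconstruction of the killprocess helper whose trace fills this cleanup section; the steps mirror the trace above (existence check, comm-name guard, kill, wait), but the body below is a sketch, not the exact source of autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # bail out if the process is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # never signal a privileged sudo wrapper
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it and propagate the exit status
    }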
00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 75565 ']' 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 75565 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75565 ']' 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75565 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75565 00:18:06.970 killing process with pid 75565 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75565' 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75565 00:18:06.970 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75565 00:18:08.345 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:08.345 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:08.345 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:08.345 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:08.345 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:18:08.345 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:08.345 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:18:08.346 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.346 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:08.346 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:08.346 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:08.346 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:08.346 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:08.346 01:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.sjHXYMivbW /tmp/tmp.PZvu5HHsDc /tmp/tmp.BKxIJCZRj7 00:18:08.346 00:18:08.346 real 1m50.605s 00:18:08.346 user 3m3.752s 00:18:08.346 sys 0m26.852s 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.346 ************************************ 00:18:08.346 END TEST nvmf_tls 00:18:08.346 ************************************ 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.346 ************************************ 00:18:08.346 START TEST nvmf_fips 00:18:08.346 ************************************ 00:18:08.346 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:08.605 * Looking for test storage... 
00:18:08.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.605 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:08.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.606 --rc genhtml_branch_coverage=1 00:18:08.606 --rc genhtml_function_coverage=1 00:18:08.606 --rc genhtml_legend=1 00:18:08.606 --rc geninfo_all_blocks=1 00:18:08.606 --rc geninfo_unexecuted_blocks=1 00:18:08.606 00:18:08.606 ' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:08.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.606 --rc genhtml_branch_coverage=1 00:18:08.606 --rc genhtml_function_coverage=1 00:18:08.606 --rc genhtml_legend=1 00:18:08.606 --rc geninfo_all_blocks=1 00:18:08.606 --rc geninfo_unexecuted_blocks=1 00:18:08.606 00:18:08.606 ' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:08.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.606 --rc genhtml_branch_coverage=1 00:18:08.606 --rc genhtml_function_coverage=1 00:18:08.606 --rc genhtml_legend=1 00:18:08.606 --rc geninfo_all_blocks=1 00:18:08.606 --rc geninfo_unexecuted_blocks=1 00:18:08.606 00:18:08.606 ' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:08.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.606 --rc genhtml_branch_coverage=1 00:18:08.606 --rc genhtml_function_coverage=1 00:18:08.606 --rc genhtml_legend=1 00:18:08.606 --rc geninfo_all_blocks=1 00:18:08.606 --rc geninfo_unexecuted_blocks=1 00:18:08.606 00:18:08.606 ' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:08.606 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:08.606 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:08.607 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:08.866 Error setting digest 00:18:08.866 40A2634FE97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:08.866 40A2634FE97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:08.866 
01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:08.866 Cannot find device "nvmf_init_br" 00:18:08.866 01:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:08.866 Cannot find device "nvmf_init_br2" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:08.866 Cannot find device "nvmf_tgt_br" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:08.866 Cannot find device "nvmf_tgt_br2" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:08.866 Cannot find device "nvmf_init_br" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:08.866 Cannot find device "nvmf_init_br2" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:08.866 Cannot find device "nvmf_tgt_br" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:08.866 Cannot find device "nvmf_tgt_br2" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:08.866 Cannot find device "nvmf_br" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:08.866 Cannot find device "nvmf_init_if" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:08.866 Cannot find device "nvmf_init_if2" 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:08.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:08.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:08.866 01:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:08.866 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:08.867 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.125 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:09.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:09.126 00:18:09.126 --- 10.0.0.3 ping statistics --- 00:18:09.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.126 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:09.126 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:09.126 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:18:09.126 00:18:09.126 --- 10.0.0.4 ping statistics --- 00:18:09.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.126 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:09.126 00:18:09.126 --- 10.0.0.1 ping statistics --- 00:18:09.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.126 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:09.126 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:09.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:09.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:18:09.126 00:18:09.126 --- 10.0.0.2 ping statistics --- 00:18:09.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.126 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=75937 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 75937 00:18:09.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 75937 ']' 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.126 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:09.384 [2024-09-28 01:31:05.154390] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
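The nvmf_veth_init sequence traced above builds the test topology from scratch: veth pairs for the initiator and target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, iptables ACCEPT rules for port 4420, and ping checks in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of the same construction for a single initiator/target pair (interface names and the 10.0.0.0/24 addressing follow the trace; needs root, iproute2 and iptables):

    # One veth pair per side; the *_br ends stay in the host and join the bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                            # host -> namespaced target connectivity check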
00:18:09.384 [2024-09-28 01:31:05.154799] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.643 [2024-09-28 01:31:05.318102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.643 [2024-09-28 01:31:05.548978] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.643 [2024-09-28 01:31:05.549223] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.643 [2024-09-28 01:31:05.549439] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.643 [2024-09-28 01:31:05.549624] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.643 [2024-09-28 01:31:05.549656] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.643 [2024-09-28 01:31:05.549708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.901 [2024-09-28 01:31:05.730181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.7jz 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.7jz 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.7jz 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.7jz 00:18:10.465 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.723 [2024-09-28 01:31:06.432913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.723 [2024-09-28 01:31:06.448839] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.723 [2024-09-28 01:31:06.449117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:10.723 malloc0 00:18:10.723 01:31:06 
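setup_nvmf_tgt_conf drives the target through rpc.py using the PSK written to /tmp/spdk-psk.7jz (an NVMeTLSkey-1:01:... interchange blob with 0600 permissions); the notices above show the outcome: TCP transport created, an experimental TLS listener on 10.0.0.3 port 4420, and a malloc0 namespace. The individual RPCs are not echoed in the trace, so the following is only a plausible sketch of such a TLS-enabled target setup; the subsystem and host NQNs, the malloc sizes, and some flag names are assumptions and may differ between SPDK versions:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/tmp/spdk-psk.7jz                                  # PSK in NVMe TLS interchange format, chmod 0600
    $RPC nvmf_create_transport -t tcp -o                   # TCP transport, options as in NVMF_TRANSPORT_OPTS
    $RPC keyring_file_add_key key0 "$KEY"                  # register the PSK under the name key0
    $RPC bdev_malloc_create -b malloc0 32 4096             # small RAM-backed bdev for the namespace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 --secure-channel
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The initiator side then references the same key by name, which is exactly what the bdevperf commands below show verbatim: keyring_file_add_key key0 /tmp/spdk-psk.7jz followed by bdev_nvme_attach_controller ... --psk key0.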
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=75983 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 75983 /var/tmp/bdevperf.sock 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 75983 ']' 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.723 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:10.981 [2024-09-28 01:31:06.690330] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:10.981 [2024-09-28 01:31:06.690551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75983 ] 00:18:10.981 [2024-09-28 01:31:06.856863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.238 [2024-09-28 01:31:07.027175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.495 [2024-09-28 01:31:07.181243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:11.753 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.753 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:11.753 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.7jz 00:18:12.012 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.270 [2024-09-28 01:31:08.090659] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.270 TLSTESTn1 00:18:12.270 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:12.527 Running I/O for 10 seconds... 
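The per-second samples and the summary table that follow report both IOPS and MiB/s; with the 4096-byte I/O size requested above (-o 4096), the two are tied together by MiB/s = IOPS * 4096 / 2^20, which makes the figures easy to sanity-check:

    # e.g. the run's final average of ~3155 IOPS at 4 KiB per I/O:
    awk 'BEGIN { printf "%.2f MiB/s\n", 3155.27 * 4096 / (1024 * 1024) }'   # -> 12.33 MiB/s, matching the table below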
00:18:22.809 2997.00 IOPS, 11.71 MiB/s 3072.00 IOPS, 12.00 MiB/s 3113.33 IOPS, 12.16 MiB/s 3118.75 IOPS, 12.18 MiB/s 3123.20 IOPS, 12.20 MiB/s 3114.67 IOPS, 12.17 MiB/s 3122.29 IOPS, 12.20 MiB/s 3120.75 IOPS, 12.19 MiB/s 3129.67 IOPS, 12.23 MiB/s 3150.00 IOPS, 12.30 MiB/s 00:18:22.809 Latency(us) 00:18:22.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.809 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:22.809 Verification LBA range: start 0x0 length 0x2000 00:18:22.809 TLSTESTn1 : 10.02 3155.27 12.33 0.00 0.00 40494.19 8400.52 29550.78 00:18:22.809 =================================================================================================================== 00:18:22.809 Total : 3155.27 12.33 0.00 0.00 40494.19 8400.52 29550.78 00:18:22.809 { 00:18:22.809 "results": [ 00:18:22.809 { 00:18:22.809 "job": "TLSTESTn1", 00:18:22.809 "core_mask": "0x4", 00:18:22.809 "workload": "verify", 00:18:22.809 "status": "finished", 00:18:22.809 "verify_range": { 00:18:22.809 "start": 0, 00:18:22.809 "length": 8192 00:18:22.809 }, 00:18:22.809 "queue_depth": 128, 00:18:22.809 "io_size": 4096, 00:18:22.809 "runtime": 10.023876, 00:18:22.809 "iops": 3155.2664857386503, 00:18:22.809 "mibps": 12.325259709916603, 00:18:22.809 "io_failed": 0, 00:18:22.809 "io_timeout": 0, 00:18:22.809 "avg_latency_us": 40494.18801936144, 00:18:22.809 "min_latency_us": 8400.523636363636, 00:18:22.809 "max_latency_us": 29550.778181818183 00:18:22.809 } 00:18:22.809 ], 00:18:22.809 "core_count": 1 00:18:22.809 } 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:22.809 nvmf_trace.0 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 75983 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 75983 ']' 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 75983 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.809 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- 
# ps --no-headers -o comm= 75983 00:18:22.809 killing process with pid 75983 00:18:22.809 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.809 00:18:22.809 Latency(us) 00:18:22.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.809 =================================================================================================================== 00:18:22.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.810 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:22.810 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:22.810 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75983' 00:18:22.810 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 75983 00:18:22.810 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 75983 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.747 rmmod nvme_tcp 00:18:23.747 rmmod nvme_fabrics 00:18:23.747 rmmod nvme_keyring 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 75937 ']' 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 75937 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 75937 ']' 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 75937 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75937 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75937' 00:18:23.747 killing process with pid 75937 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 75937 00:18:23.747 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # 
wait 75937 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.7jz 00:18:25.127 ************************************ 00:18:25.127 END TEST nvmf_fips 00:18:25.127 ************************************ 00:18:25.127 00:18:25.127 real 0m16.754s 00:18:25.127 user 0m24.110s 00:18:25.127 sys 0m5.431s 00:18:25.127 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:18:25.127 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:25.127 01:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:25.127 01:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:25.127 01:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:25.127 01:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:25.127 ************************************ 00:18:25.127 START TEST nvmf_control_msg_list 00:18:25.127 ************************************ 00:18:25.127 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:25.387 * Looking for test storage... 00:18:25.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:25.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.387 --rc genhtml_branch_coverage=1 00:18:25.387 --rc genhtml_function_coverage=1 00:18:25.387 --rc genhtml_legend=1 00:18:25.387 --rc geninfo_all_blocks=1 00:18:25.387 --rc geninfo_unexecuted_blocks=1 00:18:25.387 00:18:25.387 ' 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:25.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.387 --rc genhtml_branch_coverage=1 00:18:25.387 --rc genhtml_function_coverage=1 00:18:25.387 --rc genhtml_legend=1 00:18:25.387 --rc geninfo_all_blocks=1 00:18:25.387 --rc geninfo_unexecuted_blocks=1 00:18:25.387 00:18:25.387 ' 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:25.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.387 --rc genhtml_branch_coverage=1 00:18:25.387 --rc genhtml_function_coverage=1 00:18:25.387 --rc genhtml_legend=1 00:18:25.387 --rc geninfo_all_blocks=1 00:18:25.387 --rc geninfo_unexecuted_blocks=1 00:18:25.387 00:18:25.387 ' 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:25.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.387 --rc genhtml_branch_coverage=1 00:18:25.387 --rc genhtml_function_coverage=1 00:18:25.387 --rc genhtml_legend=1 00:18:25.387 --rc geninfo_all_blocks=1 00:18:25.387 --rc geninfo_unexecuted_blocks=1 00:18:25.387 00:18:25.387 ' 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.387 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.388 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:25.388 Cannot find device "nvmf_init_br" 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:25.388 Cannot find device "nvmf_init_br2" 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:25.388 Cannot find device "nvmf_tgt_br" 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:25.388 Cannot find device "nvmf_tgt_br2" 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:18:25.388 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:25.647 Cannot find device "nvmf_init_br" 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:25.647 Cannot find device "nvmf_init_br2" 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:25.647 Cannot find device "nvmf_tgt_br" 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:25.647 Cannot find device "nvmf_tgt_br2" 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:25.647 Cannot find device "nvmf_br" 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:25.647 Cannot find 
device "nvmf_init_if" 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:25.647 Cannot find device "nvmf_init_if2" 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:25.647 01:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:25.647 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:25.906 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:25.906 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:25.906 00:18:25.906 --- 10.0.0.3 ping statistics --- 00:18:25.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.906 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:25.906 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:25.907 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:25.907 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:18:25.907 00:18:25.907 --- 10.0.0.4 ping statistics --- 00:18:25.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.907 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:25.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:25.907 00:18:25.907 --- 10.0.0.1 ping statistics --- 00:18:25.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.907 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:25.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:25.907 00:18:25.907 --- 10.0.0.2 ping statistics --- 00:18:25.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.907 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=76378 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 76378 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 76378 ']' 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
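The trace above is the standard network bring-up from nvmf/common.sh: one veth pair per initiator interface and per target interface, the target ends moved into the nvmf_tgt_ns_spdk namespace under the 10.0.0.1-10.0.0.4/24 addressing plan, all host-side halves joined through the nvmf_br bridge, iptables opened for NVMe/TCP port 4420, and a one-packet ping to each address before the control_msg_list target is launched inside the namespace (the nvmf_tgt invocation just above). A condensed sketch of the same steps, assuming root privileges and showing only one veth pair per side for brevity (the real script also creates the *_if2 pairs):

#!/usr/bin/env bash
# Condensed sketch of the veth/bridge topology built by the trace above
# (root required; names and addresses mirror the log).
set -e

ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, one for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br

# Target end lives inside the namespace; addressing follows the 10.0.0.0/24 plan.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side halves so the initiator can reach the namespaced target.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open NVMe/TCP port 4420 and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3   # sanity check before starting nvmf_tgt in the namespace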
00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.907 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:25.907 [2024-09-28 01:31:21.832274] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:25.907 [2024-09-28 01:31:21.832754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.165 [2024-09-28 01:31:22.009963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.423 [2024-09-28 01:31:22.240942] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.423 [2024-09-28 01:31:22.241020] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.423 [2024-09-28 01:31:22.241058] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.423 [2024-09-28 01:31:22.241080] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.423 [2024-09-28 01:31:22.241097] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.423 [2024-09-28 01:31:22.241153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.683 [2024-09-28 01:31:22.415317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:26.942 [2024-09-28 01:31:22.830189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.942 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:27.202 Malloc0 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:27.202 [2024-09-28 01:31:22.898199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=76416 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=76417 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=76418 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:27.202 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 76416 00:18:27.202 [2024-09-28 01:31:23.112870] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:27.202 [2024-09-28 01:31:23.133443] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:27.461 [2024-09-28 01:31:23.143521] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:28.396 Initializing NVMe Controllers 00:18:28.396 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:28.396 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:28.396 Initialization complete. Launching workers. 00:18:28.396 ======================================================== 00:18:28.396 Latency(us) 00:18:28.396 Device Information : IOPS MiB/s Average min max 00:18:28.396 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2673.00 10.44 373.55 172.06 1455.26 00:18:28.396 ======================================================== 00:18:28.396 Total : 2673.00 10.44 373.55 172.06 1455.26 00:18:28.396 00:18:28.396 Initializing NVMe Controllers 00:18:28.397 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:28.397 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:28.397 Initialization complete. Launching workers. 00:18:28.397 ======================================================== 00:18:28.397 Latency(us) 00:18:28.397 Device Information : IOPS MiB/s Average min max 00:18:28.397 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2678.88 10.46 372.69 234.65 900.87 00:18:28.397 ======================================================== 00:18:28.397 Total : 2678.88 10.46 372.69 234.65 900.87 00:18:28.397 00:18:28.397 Initializing NVMe Controllers 00:18:28.397 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:28.397 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:28.397 Initialization complete. Launching workers. 
00:18:28.397 ======================================================== 00:18:28.397 Latency(us) 00:18:28.397 Device Information : IOPS MiB/s Average min max 00:18:28.397 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2698.94 10.54 369.95 155.43 773.27 00:18:28.397 ======================================================== 00:18:28.397 Total : 2698.94 10.54 369.95 155.43 773.27 00:18:28.397 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 76417 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 76418 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.397 rmmod nvme_tcp 00:18:28.397 rmmod nvme_fabrics 00:18:28.397 rmmod nvme_keyring 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 76378 ']' 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 76378 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 76378 ']' 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 76378 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.397 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76378 00:18:28.656 killing process with pid 76378 00:18:28.656 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.656 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.656 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76378' 00:18:28.656 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 76378 00:18:28.656 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 76378 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:29.591 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:18:29.851 00:18:29.851 real 0m4.555s 00:18:29.851 user 0m6.725s 00:18:29.851 
sys 0m1.454s 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:29.851 ************************************ 00:18:29.851 END TEST nvmf_control_msg_list 00:18:29.851 ************************************ 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:29.851 ************************************ 00:18:29.851 START TEST nvmf_wait_for_buf 00:18:29.851 ************************************ 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:29.851 * Looking for test storage... 00:18:29.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:18:29.851 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.159 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:30.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.160 --rc genhtml_branch_coverage=1 00:18:30.160 --rc genhtml_function_coverage=1 00:18:30.160 --rc genhtml_legend=1 00:18:30.160 --rc geninfo_all_blocks=1 00:18:30.160 --rc geninfo_unexecuted_blocks=1 00:18:30.160 00:18:30.160 ' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:30.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.160 --rc genhtml_branch_coverage=1 00:18:30.160 --rc genhtml_function_coverage=1 00:18:30.160 --rc genhtml_legend=1 00:18:30.160 --rc geninfo_all_blocks=1 00:18:30.160 --rc geninfo_unexecuted_blocks=1 00:18:30.160 00:18:30.160 ' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:30.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.160 --rc genhtml_branch_coverage=1 00:18:30.160 --rc genhtml_function_coverage=1 00:18:30.160 --rc genhtml_legend=1 00:18:30.160 --rc geninfo_all_blocks=1 00:18:30.160 --rc geninfo_unexecuted_blocks=1 00:18:30.160 00:18:30.160 ' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:30.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.160 --rc genhtml_branch_coverage=1 00:18:30.160 --rc genhtml_function_coverage=1 00:18:30.160 --rc genhtml_legend=1 00:18:30.160 --rc geninfo_all_blocks=1 00:18:30.160 --rc geninfo_unexecuted_blocks=1 00:18:30.160 00:18:30.160 ' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:30.160 01:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:30.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
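The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message logged above is the usual symptom of a numeric test such as '[' "$VAR" -eq 1 ']' running while the variable is empty; the branch simply falls through, so it is harmless here. A defensive form that avoids the message (the variable name and action below are illustrative only, not necessarily what common.sh line 33 checks):

# Hypothetical guard: default the possibly-empty variable before the numeric test.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi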
00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:30.160 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:30.161 Cannot find device "nvmf_init_br" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:30.161 Cannot find device "nvmf_init_br2" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:30.161 Cannot find device "nvmf_tgt_br" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:30.161 Cannot find device "nvmf_tgt_br2" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:30.161 Cannot find device "nvmf_init_br" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:30.161 Cannot find device "nvmf_init_br2" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:30.161 Cannot find device "nvmf_tgt_br" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:30.161 Cannot find device "nvmf_tgt_br2" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:30.161 Cannot find device "nvmf_br" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:30.161 Cannot find device "nvmf_init_if" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:30.161 Cannot find device "nvmf_init_if2" 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:18:30.161 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:30.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:30.161 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:30.161 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:30.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:30.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:18:30.420 00:18:30.420 --- 10.0.0.3 ping statistics --- 00:18:30.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.420 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:30.420 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:30.420 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:18:30.420 00:18:30.420 --- 10.0.0.4 ping statistics --- 00:18:30.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.420 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:30.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:30.420 00:18:30.420 --- 10.0.0.1 ping statistics --- 00:18:30.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.420 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:30.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:30.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:30.420 00:18:30.420 --- 10.0.0.2 ping statistics --- 00:18:30.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.420 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:30.420 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=76668 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 76668 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 76668 ']' 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.421 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:30.680 [2024-09-28 01:31:26.417324] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
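The wait_for_buf target above is launched with --wait-for-rpc, so after the startup messages the SPDK framework stays paused until the test has shrunk the iobuf small-buffer pool, and only then is framework_start_init issued and the TCP transport created (the rpc_cmd calls a little further down in this trace). A hedged sketch of that sequence using scripts/rpc.py against the default /var/tmp/spdk.sock; the trace itself goes through the test suite's rpc_cmd wrapper, and the crude socket wait below merely stands in for waitforlisten:

# Start the target paused inside the test namespace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten

# Deliberately tiny iobuf small pool so the wait-for-buf path gets exercised,
# then let the framework finish initializing (options as logged below).
$rpc accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
$rpc framework_start_init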
00:18:30.680 [2024-09-28 01:31:26.417562] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.680 [2024-09-28 01:31:26.593538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.938 [2024-09-28 01:31:26.825935] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.938 [2024-09-28 01:31:26.826011] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.938 [2024-09-28 01:31:26.826036] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.938 [2024-09-28 01:31:26.826057] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.938 [2024-09-28 01:31:26.826074] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.938 [2024-09-28 01:31:26.826121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.504 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.505 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:18:31.505 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:31.505 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:31.505 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.764 01:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 [2024-09-28 01:31:27.564200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 Malloc0 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.764 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 [2024-09-28 01:31:27.695574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:32.023 [2024-09-28 01:31:27.719777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.023 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.023 [2024-09-28 01:31:27.954674] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:33.400 Initializing NVMe Controllers 00:18:33.400 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:33.400 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:33.400 Initialization complete. Launching workers. 00:18:33.400 ======================================================== 00:18:33.400 Latency(us) 00:18:33.400 Device Information : IOPS MiB/s Average min max 00:18:33.400 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 484.97 60.62 8262.66 6999.81 16030.01 00:18:33.400 ======================================================== 00:18:33.400 Total : 484.97 60.62 8262.66 6999.81 16030.01 00:18:33.400 00:18:33.400 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:33.400 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.400 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:33.400 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:33.400 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4598 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4598 -eq 0 ]] 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.660 rmmod nvme_tcp 00:18:33.660 rmmod nvme_fabrics 00:18:33.660 rmmod nvme_keyring 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 76668 ']' 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 76668 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 76668 ']' 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 76668 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76668 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:33.660 killing process with pid 76668 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76668' 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 76668 00:18:33.660 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 76668 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:34.597 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:34.856 00:18:34.856 real 0m4.947s 00:18:34.856 user 0m4.484s 00:18:34.856 sys 0m0.956s 00:18:34.856 ************************************ 00:18:34.856 END TEST nvmf_wait_for_buf 00:18:34.856 ************************************ 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.856 ************************************ 00:18:34.856 START TEST nvmf_fuzz 00:18:34.856 ************************************ 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:34.856 * Looking for test storage... 
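Stripped of the xtrace noise, the nvmf_wait_for_buf run that just finished above reduces to roughly the sequence below. This is a condensed sketch, not part of the captured output: the commands and values are copied from the trace, while using rpc.py as the front end for rpc_cmd is an assumption about the test harness.

  # shrink the iobuf small pool so the TCP transport is forced to wait for buffers
  rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc.py framework_start_init
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # drive I/O against the listener, then read back the buffer-wait retry counter
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'

The deliberately small pool forces nvmf_TCP into buffer waits, and the run above read back small_pool.retry = 4598 before tearing the target down.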
00:18:34.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:34.856 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.116 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:35.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.117 --rc genhtml_branch_coverage=1 00:18:35.117 --rc genhtml_function_coverage=1 00:18:35.117 --rc genhtml_legend=1 00:18:35.117 --rc geninfo_all_blocks=1 00:18:35.117 --rc geninfo_unexecuted_blocks=1 00:18:35.117 00:18:35.117 ' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:35.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.117 --rc genhtml_branch_coverage=1 00:18:35.117 --rc genhtml_function_coverage=1 00:18:35.117 --rc genhtml_legend=1 00:18:35.117 --rc geninfo_all_blocks=1 00:18:35.117 --rc geninfo_unexecuted_blocks=1 00:18:35.117 00:18:35.117 ' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:35.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.117 --rc genhtml_branch_coverage=1 00:18:35.117 --rc genhtml_function_coverage=1 00:18:35.117 --rc genhtml_legend=1 00:18:35.117 --rc geninfo_all_blocks=1 00:18:35.117 --rc geninfo_unexecuted_blocks=1 00:18:35.117 00:18:35.117 ' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:35.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.117 --rc genhtml_branch_coverage=1 00:18:35.117 --rc genhtml_function_coverage=1 00:18:35.117 --rc genhtml_legend=1 00:18:35.117 --rc geninfo_all_blocks=1 00:18:35.117 --rc geninfo_unexecuted_blocks=1 00:18:35.117 00:18:35.117 ' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:35.117 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:35.117 Cannot find device "nvmf_init_br" 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:35.117 01:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:35.117 Cannot find device "nvmf_init_br2" 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:35.117 Cannot find device "nvmf_tgt_br" 00:18:35.117 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.118 Cannot find device "nvmf_tgt_br2" 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:35.118 Cannot find device "nvmf_init_br" 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:35.118 Cannot find device "nvmf_init_br2" 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:35.118 Cannot find device "nvmf_tgt_br" 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:35.118 Cannot find device "nvmf_tgt_br2" 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:35.118 Cannot find device "nvmf_br" 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:35.118 Cannot find device "nvmf_init_if" 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:18:35.118 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:35.118 Cannot find device "nvmf_init_if2" 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:35.118 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:35.377 01:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:35.377 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:35.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:18:35.377 00:18:35.377 --- 10.0.0.3 ping statistics --- 00:18:35.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.378 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:35.378 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:35.378 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:18:35.378 00:18:35.378 --- 10.0.0.4 ping statistics --- 00:18:35.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.378 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:35.378 00:18:35.378 --- 10.0.0.1 ping statistics --- 00:18:35.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.378 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:35.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:35.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:18:35.378 00:18:35.378 --- 10.0.0.2 ping statistics --- 00:18:35.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.378 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:35.378 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=76972 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 76972 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 76972 ']' 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
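For orientation, the nvmf_veth_init sequence traced above builds the topology the fuzz target listens on. The sketch below condenses the ip/iptables commands already shown in the trace, omitting the second init/target veth pair and the individual `ip link set ... up` steps for brevity; nothing here is new beyond that condensation.

  # one veth leg inside the target namespace, host-side legs joined by a bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings to 10.0.0.3/10.0.0.4 and 10.0.0.1/10.0.0.2 above are sanity checks that the bridge forwards between host and namespace before nvmf_tgt is started.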
00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.637 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.574 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.833 Malloc0 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:18:36.833 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:18:37.770 Shutting down the fuzz application 00:18:37.770 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:38.337 Shutting down the fuzz application 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:38.337 rmmod nvme_tcp 00:18:38.337 rmmod nvme_fabrics 00:18:38.337 rmmod nvme_keyring 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 76972 ']' 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 76972 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 76972 ']' 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 76972 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76972 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76972' 00:18:38.337 killing process with pid 76972 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 76972 00:18:38.337 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 76972 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:39.716 01:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.716 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.975 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:39.976 00:18:39.976 real 0m5.019s 00:18:39.976 user 0m5.652s 00:18:39.976 sys 0m0.887s 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.976 ************************************ 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:39.976 END TEST nvmf_fuzz 00:18:39.976 ************************************ 00:18:39.976 01:31:35 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.976 ************************************ 00:18:39.976 START TEST nvmf_multiconnection 00:18:39.976 ************************************ 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:39.976 * Looking for test storage... 00:18:39.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:18:39.976 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.235 --rc genhtml_branch_coverage=1 00:18:40.235 --rc genhtml_function_coverage=1 00:18:40.235 --rc genhtml_legend=1 00:18:40.235 --rc geninfo_all_blocks=1 00:18:40.235 --rc geninfo_unexecuted_blocks=1 00:18:40.235 00:18:40.235 ' 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.235 --rc genhtml_branch_coverage=1 00:18:40.235 --rc genhtml_function_coverage=1 00:18:40.235 --rc genhtml_legend=1 00:18:40.235 --rc geninfo_all_blocks=1 00:18:40.235 --rc geninfo_unexecuted_blocks=1 00:18:40.235 00:18:40.235 ' 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.235 --rc genhtml_branch_coverage=1 00:18:40.235 --rc genhtml_function_coverage=1 00:18:40.235 --rc genhtml_legend=1 00:18:40.235 --rc geninfo_all_blocks=1 00:18:40.235 --rc geninfo_unexecuted_blocks=1 00:18:40.235 00:18:40.235 ' 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.235 --rc genhtml_branch_coverage=1 00:18:40.235 --rc genhtml_function_coverage=1 00:18:40.235 --rc genhtml_legend=1 00:18:40.235 --rc geninfo_all_blocks=1 00:18:40.235 --rc geninfo_unexecuted_blocks=1 00:18:40.235 00:18:40.235 ' 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.235 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.236 
01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.236 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.236 01:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:40.236 Cannot find device "nvmf_init_br" 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:40.236 Cannot find device "nvmf_init_br2" 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:40.236 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:40.236 Cannot find device "nvmf_tgt_br" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.236 Cannot find device "nvmf_tgt_br2" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:40.236 Cannot find device "nvmf_init_br" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:40.236 Cannot find device "nvmf_init_br2" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:40.236 Cannot find device "nvmf_tgt_br" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:40.236 Cannot find device "nvmf_tgt_br2" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:40.236 Cannot find device "nvmf_br" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:40.236 Cannot find device "nvmf_init_if" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:18:40.236 Cannot find device "nvmf_init_if2" 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.236 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:40.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:40.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:40.495 00:18:40.495 --- 10.0.0.3 ping statistics --- 00:18:40.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.495 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:40.495 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:40.495 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:18:40.495 00:18:40.495 --- 10.0.0.4 ping statistics --- 00:18:40.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.495 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:40.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:40.495 00:18:40.495 --- 10.0.0.1 ping statistics --- 00:18:40.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.495 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:40.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:18:40.495 00:18:40.495 --- 10.0.0.2 ping statistics --- 00:18:40.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.495 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:40.495 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=77236 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 77236 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 77236 ']' 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
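At this point nvmf_veth_init has finished building the self-contained NVMe/TCP test topology: the earlier "Cannot find device" lines are just the cleanup pass on a fresh VM, after which the harness creates the nvmf_tgt_ns_spdk namespace, four veth pairs (target ends moved into the namespace), the nvmf_br bridge, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of the same setup, using only the interface names and addresses that appear in the log above (a simplified reconstruction, not the verbatim common.sh code):

    # Namespace for the SPDK target side
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if ends are endpoints, *_br ends will be enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1/.2 for the initiators, 10.0.0.3/.4 for the target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and join the bridge-side ends to nvmf_br
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Allow NVMe/TCP traffic on port 4420 and let the bridge forward it
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks in both directions (mirrors the four pings in the log)
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With the topology verified, nvmf_tgt is started inside the namespace (NVMF_APP prefixed with the "ip netns exec nvmf_tgt_ns_spdk" command), which is what the nvmfappstart/waitforlisten lines that follow are doing.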
00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.496 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.754 [2024-09-28 01:31:36.513137] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:40.754 [2024-09-28 01:31:36.513363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.754 [2024-09-28 01:31:36.680241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.013 [2024-09-28 01:31:36.885943] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.013 [2024-09-28 01:31:36.886003] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.013 [2024-09-28 01:31:36.886037] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.013 [2024-09-28 01:31:36.886063] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.013 [2024-09-28 01:31:36.886075] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.013 [2024-09-28 01:31:36.886511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.013 [2024-09-28 01:31:36.886698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.013 [2024-09-28 01:31:36.886829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.013 [2024-09-28 01:31:36.886864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.271 [2024-09-28 01:31:37.077750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.838 [2024-09-28 01:31:37.591334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:41.838 01:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.838 Malloc1 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.838 [2024-09-28 01:31:37.707631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.838 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 Malloc2 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 Malloc3 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 Malloc4 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.355 Malloc5 00:18:42.355 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.355 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:42.355 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.355 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:42.356 
01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.356 Malloc6 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.356 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.615 Malloc7 00:18:42.615 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.615 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 Malloc8 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 
01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 Malloc9 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.616 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.875 Malloc10 00:18:42.875 01:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.875 Malloc11 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.875 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:43.134 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:43.134 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:43.134 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.134 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:43.134 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:45.038 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:45.038 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:45.038 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:18:45.038 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:45.038 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.038 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:45.038 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.039 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:18:45.297 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:45.298 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:45.298 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.298 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:45.298 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:47.202 01:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:47.202 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:47.202 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:18:47.202 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:47.202 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.202 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:47.202 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.202 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:18:47.461 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:47.461 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:47.461 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.461 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:47.461 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:49.367 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:49.367 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:49.367 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:18:49.367 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:49.367 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.367 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:49.367 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.367 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:18:49.626 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:49.626 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:49.626 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.626 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:18:49.626 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:51.528 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:51.528 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:51.528 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:18:51.528 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:51.528 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.528 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:51.528 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:51.528 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:18:51.787 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:51.787 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:51.787 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.787 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:51.787 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:53.689 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:53.689 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:53.689 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:18:53.689 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:53.689 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.689 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:53.689 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.689 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:18:53.947 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:53.947 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:53.947 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:18:53.947 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:53.947 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:55.850 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:55.850 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:55.850 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:18:55.850 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:55.850 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.850 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:55.850 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:55.850 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:18:56.109 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:56.109 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:56.109 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.109 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:56.109 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:58.013 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:58.013 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:58.013 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:18:58.013 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:58.013 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.013 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:58.013 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.013 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:18:58.272 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:58.272 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:18:58.272 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.272 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:58.272 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:00.176 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:00.176 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:00.176 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:19:00.176 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:00.176 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.176 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:00.176 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.176 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:19:00.434 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:00.434 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:00.434 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.434 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:00.434 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:02.391 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:02.391 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:02.391 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:19:02.391 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:02.391 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.391 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:02.391 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:02.391 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:19:02.651 01:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:02.651 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:02.651 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.651 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:02.651 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:04.556 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:04.556 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:04.556 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:19:04.556 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:04.556 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.556 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:04.556 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.556 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:19:04.815 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:04.815 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.815 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.815 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:04.815 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:06.718 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:06.718 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:06.718 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:19:06.718 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:06.718 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.718 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:06.718 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:06.718 [global] 00:19:06.718 thread=1 00:19:06.718 invalidate=1 00:19:06.718 rw=read 00:19:06.718 time_based=1 
00:19:06.718 runtime=10 00:19:06.718 ioengine=libaio 00:19:06.718 direct=1 00:19:06.718 bs=262144 00:19:06.718 iodepth=64 00:19:06.718 norandommap=1 00:19:06.718 numjobs=1 00:19:06.718 00:19:06.718 [job0] 00:19:06.718 filename=/dev/nvme0n1 00:19:06.718 [job1] 00:19:06.718 filename=/dev/nvme10n1 00:19:06.718 [job2] 00:19:06.718 filename=/dev/nvme1n1 00:19:06.718 [job3] 00:19:06.718 filename=/dev/nvme2n1 00:19:06.718 [job4] 00:19:06.718 filename=/dev/nvme3n1 00:19:06.718 [job5] 00:19:06.718 filename=/dev/nvme4n1 00:19:06.718 [job6] 00:19:06.718 filename=/dev/nvme5n1 00:19:06.718 [job7] 00:19:06.718 filename=/dev/nvme6n1 00:19:06.718 [job8] 00:19:06.718 filename=/dev/nvme7n1 00:19:06.718 [job9] 00:19:06.718 filename=/dev/nvme8n1 00:19:06.718 [job10] 00:19:06.718 filename=/dev/nvme9n1 00:19:06.977 Could not set queue depth (nvme0n1) 00:19:06.977 Could not set queue depth (nvme10n1) 00:19:06.977 Could not set queue depth (nvme1n1) 00:19:06.977 Could not set queue depth (nvme2n1) 00:19:06.977 Could not set queue depth (nvme3n1) 00:19:06.977 Could not set queue depth (nvme4n1) 00:19:06.977 Could not set queue depth (nvme5n1) 00:19:06.977 Could not set queue depth (nvme6n1) 00:19:06.977 Could not set queue depth (nvme7n1) 00:19:06.977 Could not set queue depth (nvme8n1) 00:19:06.977 Could not set queue depth (nvme9n1) 00:19:06.977 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.977 fio-3.35 00:19:06.977 Starting 11 threads 00:19:19.188 00:19:19.188 job0: (groupid=0, jobs=1): err= 0: pid=77698: Sat Sep 28 01:32:13 2024 00:19:19.188 read: IOPS=160, BW=40.2MiB/s (42.1MB/s)(407MiB/10127msec) 00:19:19.188 slat (usec): min=21, max=103039, avg=6147.75, stdev=14771.65 00:19:19.188 clat (msec): min=109, max=519, avg=391.37, stdev=60.18 00:19:19.188 lat (msec): min=117, max=527, avg=397.52, stdev=60.61 00:19:19.188 clat percentiles (msec): 00:19:19.188 | 1.00th=[ 144], 5.00th=[ 296], 10.00th=[ 330], 20.00th=[ 363], 00:19:19.188 | 30.00th=[ 380], 40.00th=[ 388], 50.00th=[ 401], 60.00th=[ 409], 00:19:19.188 | 70.00th=[ 418], 80.00th=[ 435], 90.00th=[ 456], 95.00th=[ 468], 00:19:19.188 | 99.00th=[ 493], 99.50th=[ 510], 99.90th=[ 518], 99.95th=[ 518], 00:19:19.188 | 99.99th=[ 518] 00:19:19.188 bw ( KiB/s): min=36352, max=44544, 
per=7.55%, avg=40042.50, stdev=2080.75, samples=20 00:19:19.188 iops : min= 142, max= 174, avg=156.40, stdev= 8.12, samples=20 00:19:19.188 lat (msec) : 250=2.83%, 500=96.25%, 750=0.92% 00:19:19.188 cpu : usr=0.07%, sys=0.77%, ctx=331, majf=0, minf=4097 00:19:19.188 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:19:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.188 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.188 issued rwts: total=1628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.188 job1: (groupid=0, jobs=1): err= 0: pid=77699: Sat Sep 28 01:32:13 2024 00:19:19.188 read: IOPS=101, BW=25.3MiB/s (26.6MB/s)(258MiB/10162msec) 00:19:19.188 slat (usec): min=20, max=304378, avg=9708.96, stdev=27453.28 00:19:19.188 clat (msec): min=17, max=812, avg=620.79, stdev=140.55 00:19:19.188 lat (msec): min=17, max=992, avg=630.49, stdev=142.03 00:19:19.188 clat percentiles (msec): 00:19:19.188 | 1.00th=[ 108], 5.00th=[ 239], 10.00th=[ 527], 20.00th=[ 567], 00:19:19.188 | 30.00th=[ 600], 40.00th=[ 634], 50.00th=[ 659], 60.00th=[ 676], 00:19:19.188 | 70.00th=[ 701], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 760], 00:19:19.188 | 99.00th=[ 785], 99.50th=[ 785], 99.90th=[ 810], 99.95th=[ 810], 00:19:19.188 | 99.99th=[ 810] 00:19:19.188 bw ( KiB/s): min= 7680, max=32768, per=4.67%, avg=24755.20, stdev=6085.96, samples=20 00:19:19.188 iops : min= 30, max= 128, avg=96.70, stdev=23.77, samples=20 00:19:19.188 lat (msec) : 20=0.10%, 50=0.58%, 250=4.76%, 500=3.69%, 750=84.08% 00:19:19.188 lat (msec) : 1000=6.80% 00:19:19.188 cpu : usr=0.07%, sys=0.48%, ctx=195, majf=0, minf=4097 00:19:19.188 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:19:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.188 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.188 issued rwts: total=1030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.188 job2: (groupid=0, jobs=1): err= 0: pid=77700: Sat Sep 28 01:32:13 2024 00:19:19.188 read: IOPS=331, BW=83.0MiB/s (87.0MB/s)(836MiB/10080msec) 00:19:19.188 slat (usec): min=21, max=117129, avg=2984.16, stdev=6977.95 00:19:19.188 clat (msec): min=58, max=311, avg=189.63, stdev=21.65 00:19:19.188 lat (msec): min=58, max=311, avg=192.62, stdev=22.03 00:19:19.188 clat percentiles (msec): 00:19:19.188 | 1.00th=[ 100], 5.00th=[ 163], 10.00th=[ 171], 20.00th=[ 180], 00:19:19.188 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:19:19.188 | 70.00th=[ 199], 80.00th=[ 203], 90.00th=[ 209], 95.00th=[ 218], 00:19:19.188 | 99.00th=[ 255], 99.50th=[ 271], 99.90th=[ 296], 99.95th=[ 296], 00:19:19.188 | 99.99th=[ 313] 00:19:19.188 bw ( KiB/s): min=76288, max=87552, per=15.84%, avg=84019.20, stdev=2984.99, samples=20 00:19:19.188 iops : min= 298, max= 342, avg=328.20, stdev=11.66, samples=20 00:19:19.188 lat (msec) : 100=1.23%, 250=97.52%, 500=1.26% 00:19:19.188 cpu : usr=0.12%, sys=1.54%, ctx=735, majf=0, minf=4097 00:19:19.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.188 issued rwts: total=3345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.188 
latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.188 job3: (groupid=0, jobs=1): err= 0: pid=77701: Sat Sep 28 01:32:13 2024 00:19:19.188 read: IOPS=267, BW=66.8MiB/s (70.1MB/s)(674MiB/10088msec) 00:19:19.188 slat (usec): min=16, max=81115, avg=3705.18, stdev=9129.36 00:19:19.188 clat (msec): min=30, max=326, avg=235.42, stdev=34.67 00:19:19.188 lat (msec): min=30, max=326, avg=239.13, stdev=34.79 00:19:19.188 clat percentiles (msec): 00:19:19.188 | 1.00th=[ 108], 5.00th=[ 182], 10.00th=[ 199], 20.00th=[ 220], 00:19:19.188 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 241], 60.00th=[ 245], 00:19:19.188 | 70.00th=[ 251], 80.00th=[ 259], 90.00th=[ 271], 95.00th=[ 284], 00:19:19.188 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 313], 99.95th=[ 321], 00:19:19.188 | 99.99th=[ 326] 00:19:19.188 bw ( KiB/s): min=60928, max=75264, per=12.72%, avg=67443.10, stdev=2991.83, samples=20 00:19:19.188 iops : min= 238, max= 294, avg=263.30, stdev=11.71, samples=20 00:19:19.188 lat (msec) : 50=0.48%, 100=0.37%, 250=67.41%, 500=31.74% 00:19:19.188 cpu : usr=0.09%, sys=1.19%, ctx=527, majf=0, minf=4097 00:19:19.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.188 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.188 job4: (groupid=0, jobs=1): err= 0: pid=77702: Sat Sep 28 01:32:13 2024 00:19:19.188 read: IOPS=99, BW=24.8MiB/s (26.0MB/s)(252MiB/10156msec) 00:19:19.188 slat (usec): min=20, max=342254, avg=9923.66, stdev=30838.84 00:19:19.188 clat (msec): min=18, max=834, avg=633.29, stdev=138.80 00:19:19.188 lat (msec): min=19, max=969, avg=643.22, stdev=140.36 00:19:19.188 clat percentiles (msec): 00:19:19.188 | 1.00th=[ 46], 5.00th=[ 351], 10.00th=[ 506], 20.00th=[ 575], 00:19:19.188 | 30.00th=[ 609], 40.00th=[ 642], 50.00th=[ 667], 60.00th=[ 693], 00:19:19.188 | 70.00th=[ 709], 80.00th=[ 726], 90.00th=[ 760], 95.00th=[ 768], 00:19:19.188 | 99.00th=[ 802], 99.50th=[ 810], 99.90th=[ 835], 99.95th=[ 835], 00:19:19.188 | 99.99th=[ 835] 00:19:19.188 bw ( KiB/s): min=11776, max=32256, per=4.56%, avg=24190.30, stdev=5971.97, samples=20 00:19:19.188 iops : min= 46, max= 126, avg=94.40, stdev=23.34, samples=20 00:19:19.188 lat (msec) : 20=0.10%, 50=1.49%, 100=0.69%, 250=0.79%, 500=6.84% 00:19:19.188 lat (msec) : 750=78.00%, 1000=12.09% 00:19:19.188 cpu : usr=0.05%, sys=0.48%, ctx=185, majf=0, minf=4097 00:19:19.188 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:19:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.188 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.188 issued rwts: total=1009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.188 job5: (groupid=0, jobs=1): err= 0: pid=77703: Sat Sep 28 01:32:13 2024 00:19:19.188 read: IOPS=102, BW=25.6MiB/s (26.9MB/s)(260MiB/10163msec) 00:19:19.188 slat (usec): min=19, max=213418, avg=9354.78, stdev=26570.85 00:19:19.188 clat (msec): min=32, max=823, avg=614.41, stdev=146.05 00:19:19.188 lat (msec): min=32, max=839, avg=623.77, stdev=147.24 00:19:19.188 clat percentiles (msec): 00:19:19.188 | 1.00th=[ 52], 5.00th=[ 215], 10.00th=[ 489], 20.00th=[ 550], 00:19:19.188 | 30.00th=[ 584], 40.00th=[ 
609], 50.00th=[ 651], 60.00th=[ 676], 00:19:19.188 | 70.00th=[ 701], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 768], 00:19:19.188 | 99.00th=[ 802], 99.50th=[ 818], 99.90th=[ 827], 99.95th=[ 827], 00:19:19.188 | 99.99th=[ 827] 00:19:19.188 bw ( KiB/s): min=14848, max=32833, per=4.72%, avg=25040.05, stdev=5642.58, samples=20 00:19:19.188 iops : min= 58, max= 128, avg=97.80, stdev=22.02, samples=20 00:19:19.188 lat (msec) : 50=0.96%, 100=0.38%, 250=4.13%, 500=4.80%, 750=80.50% 00:19:19.188 lat (msec) : 1000=9.22% 00:19:19.188 cpu : usr=0.08%, sys=0.48%, ctx=228, majf=0, minf=4097 00:19:19.188 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:19:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.188 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.188 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.188 job6: (groupid=0, jobs=1): err= 0: pid=77704: Sat Sep 28 01:32:13 2024 00:19:19.189 read: IOPS=161, BW=40.3MiB/s (42.3MB/s)(408MiB/10121msec) 00:19:19.189 slat (usec): min=19, max=178521, avg=6123.30, stdev=15749.62 00:19:19.189 clat (msec): min=96, max=538, avg=390.29, stdev=66.01 00:19:19.189 lat (msec): min=96, max=565, avg=396.42, stdev=66.84 00:19:19.189 clat percentiles (msec): 00:19:19.189 | 1.00th=[ 129], 5.00th=[ 279], 10.00th=[ 334], 20.00th=[ 368], 00:19:19.189 | 30.00th=[ 384], 40.00th=[ 393], 50.00th=[ 401], 60.00th=[ 409], 00:19:19.189 | 70.00th=[ 418], 80.00th=[ 426], 90.00th=[ 447], 95.00th=[ 472], 00:19:19.189 | 99.00th=[ 514], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:19:19.189 | 99.99th=[ 542] 00:19:19.189 bw ( KiB/s): min=36864, max=45568, per=7.57%, avg=40165.95, stdev=2130.34, samples=20 00:19:19.189 iops : min= 144, max= 178, avg=156.75, stdev= 8.35, samples=20 00:19:19.189 lat (msec) : 100=0.61%, 250=4.29%, 500=92.89%, 750=2.21% 00:19:19.189 cpu : usr=0.09%, sys=0.74%, ctx=310, majf=0, minf=4097 00:19:19.189 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:19:19.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.189 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.189 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.189 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.189 job7: (groupid=0, jobs=1): err= 0: pid=77705: Sat Sep 28 01:32:13 2024 00:19:19.189 read: IOPS=268, BW=67.1MiB/s (70.4MB/s)(677MiB/10094msec) 00:19:19.189 slat (usec): min=21, max=92249, avg=3687.40, stdev=8982.59 00:19:19.189 clat (msec): min=16, max=342, avg=234.47, stdev=34.96 00:19:19.189 lat (msec): min=18, max=342, avg=238.16, stdev=35.30 00:19:19.189 clat percentiles (msec): 00:19:19.189 | 1.00th=[ 85], 5.00th=[ 171], 10.00th=[ 197], 20.00th=[ 218], 00:19:19.189 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 241], 60.00th=[ 245], 00:19:19.189 | 70.00th=[ 249], 80.00th=[ 257], 90.00th=[ 271], 95.00th=[ 279], 00:19:19.189 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 342], 00:19:19.189 | 99.99th=[ 342] 00:19:19.189 bw ( KiB/s): min=59392, max=72704, per=12.77%, avg=67737.60, stdev=3601.66, samples=20 00:19:19.189 iops : min= 232, max= 284, avg=264.60, stdev=14.07, samples=20 00:19:19.189 lat (msec) : 20=0.22%, 50=0.18%, 100=0.85%, 250=70.28%, 500=28.46% 00:19:19.189 cpu : usr=0.13%, sys=1.24%, ctx=570, majf=0, minf=4097 00:19:19.189 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:19.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.189 issued rwts: total=2709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.189 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.189 job8: (groupid=0, jobs=1): err= 0: pid=77706: Sat Sep 28 01:32:13 2024 00:19:19.189 read: IOPS=160, BW=40.1MiB/s (42.1MB/s)(407MiB/10125msec) 00:19:19.189 slat (usec): min=19, max=212225, avg=6149.72, stdev=16106.46 00:19:19.189 clat (msec): min=16, max=549, avg=391.82, stdev=52.97 00:19:19.189 lat (msec): min=17, max=549, avg=397.97, stdev=53.65 00:19:19.189 clat percentiles (msec): 00:19:19.189 | 1.00th=[ 155], 5.00th=[ 313], 10.00th=[ 347], 20.00th=[ 372], 00:19:19.189 | 30.00th=[ 384], 40.00th=[ 393], 50.00th=[ 397], 60.00th=[ 405], 00:19:19.189 | 70.00th=[ 414], 80.00th=[ 422], 90.00th=[ 435], 95.00th=[ 451], 00:19:19.189 | 99.00th=[ 489], 99.50th=[ 514], 99.90th=[ 514], 99.95th=[ 550], 00:19:19.189 | 99.99th=[ 550] 00:19:19.189 bw ( KiB/s): min=23040, max=54272, per=7.55%, avg=40012.80, stdev=5495.05, samples=20 00:19:19.189 iops : min= 90, max= 212, avg=156.30, stdev=21.47, samples=20 00:19:19.189 lat (msec) : 20=0.06%, 100=0.74%, 250=1.29%, 500=97.05%, 750=0.86% 00:19:19.189 cpu : usr=0.11%, sys=0.70%, ctx=313, majf=0, minf=4097 00:19:19.189 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:19:19.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.189 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.189 issued rwts: total=1626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.189 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.189 job9: (groupid=0, jobs=1): err= 0: pid=77707: Sat Sep 28 01:32:13 2024 00:19:19.189 read: IOPS=332, BW=83.1MiB/s (87.1MB/s)(838MiB/10084msec) 00:19:19.189 slat (usec): min=20, max=88105, avg=2982.05, stdev=6872.97 00:19:19.189 clat (msec): min=16, max=302, avg=189.28, stdev=22.03 00:19:19.189 lat (msec): min=17, max=302, avg=192.26, stdev=22.35 00:19:19.189 clat percentiles (msec): 00:19:19.189 | 1.00th=[ 90], 5.00th=[ 165], 10.00th=[ 171], 20.00th=[ 180], 00:19:19.189 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:19:19.189 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 209], 95.00th=[ 215], 00:19:19.189 | 99.00th=[ 243], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 305], 00:19:19.189 | 99.99th=[ 305] 00:19:19.189 bw ( KiB/s): min=67072, max=88576, per=15.87%, avg=84172.80, stdev=4592.70, samples=20 00:19:19.189 iops : min= 262, max= 346, avg=328.80, stdev=17.94, samples=20 00:19:19.189 lat (msec) : 20=0.09%, 50=0.39%, 100=0.75%, 250=97.79%, 500=0.98% 00:19:19.189 cpu : usr=0.15%, sys=1.54%, ctx=707, majf=0, minf=4097 00:19:19.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:19.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.189 issued rwts: total=3352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.189 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.189 job10: (groupid=0, jobs=1): err= 0: pid=77708: Sat Sep 28 01:32:13 2024 00:19:19.189 read: IOPS=96, BW=24.2MiB/s (25.4MB/s)(246MiB/10151msec) 00:19:19.189 slat (usec): min=21, max=292320, avg=10190.13, stdev=29895.83 00:19:19.189 clat (msec): 
min=138, max=932, avg=650.84, stdev=134.58 00:19:19.189 lat (msec): min=173, max=932, avg=661.03, stdev=135.20 00:19:19.189 clat percentiles (msec): 00:19:19.189 | 1.00th=[ 176], 5.00th=[ 426], 10.00th=[ 481], 20.00th=[ 567], 00:19:19.189 | 30.00th=[ 617], 40.00th=[ 651], 50.00th=[ 684], 60.00th=[ 709], 00:19:19.189 | 70.00th=[ 726], 80.00th=[ 751], 90.00th=[ 785], 95.00th=[ 802], 00:19:19.189 | 99.00th=[ 844], 99.50th=[ 877], 99.90th=[ 936], 99.95th=[ 936], 00:19:19.189 | 99.99th=[ 936] 00:19:19.189 bw ( KiB/s): min= 9216, max=32256, per=4.43%, avg=23516.90, stdev=7029.18, samples=20 00:19:19.189 iops : min= 36, max= 126, avg=91.80, stdev=27.44, samples=20 00:19:19.189 lat (msec) : 250=3.56%, 500=8.25%, 750=67.92%, 1000=20.26% 00:19:19.189 cpu : usr=0.05%, sys=0.45%, ctx=206, majf=0, minf=4097 00:19:19.189 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:19:19.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.189 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.189 issued rwts: total=982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.189 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.189 00:19:19.189 Run status group 0 (all jobs): 00:19:19.189 READ: bw=518MiB/s (543MB/s), 24.2MiB/s-83.1MiB/s (25.4MB/s-87.1MB/s), io=5263MiB (5518MB), run=10080-10163msec 00:19:19.189 00:19:19.189 Disk stats (read/write): 00:19:19.189 nvme0n1: ios=3132/0, merge=0/0, ticks=1223784/0, in_queue=1223784, util=97.86% 00:19:19.189 nvme10n1: ios=1940/0, merge=0/0, ticks=1203947/0, in_queue=1203947, util=98.00% 00:19:19.189 nvme1n1: ios=6582/0, merge=0/0, ticks=1234170/0, in_queue=1234170, util=98.18% 00:19:19.189 nvme2n1: ios=5272/0, merge=0/0, ticks=1232926/0, in_queue=1232926, util=98.24% 00:19:19.189 nvme3n1: ios=1893/0, merge=0/0, ticks=1202763/0, in_queue=1202763, util=98.31% 00:19:19.189 nvme4n1: ios=1961/0, merge=0/0, ticks=1201381/0, in_queue=1201381, util=98.48% 00:19:19.189 nvme5n1: ios=3144/0, merge=0/0, ticks=1224779/0, in_queue=1224779, util=98.62% 00:19:19.189 nvme6n1: ios=5297/0, merge=0/0, ticks=1232297/0, in_queue=1232297, util=98.70% 00:19:19.189 nvme7n1: ios=3131/0, merge=0/0, ticks=1220037/0, in_queue=1220037, util=99.03% 00:19:19.189 nvme8n1: ios=6590/0, merge=0/0, ticks=1234518/0, in_queue=1234518, util=99.12% 00:19:19.189 nvme9n1: ios=1836/0, merge=0/0, ticks=1205897/0, in_queue=1205897, util=99.09% 00:19:19.189 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:19.189 [global] 00:19:19.189 thread=1 00:19:19.189 invalidate=1 00:19:19.189 rw=randwrite 00:19:19.189 time_based=1 00:19:19.189 runtime=10 00:19:19.189 ioengine=libaio 00:19:19.189 direct=1 00:19:19.189 bs=262144 00:19:19.189 iodepth=64 00:19:19.189 norandommap=1 00:19:19.189 numjobs=1 00:19:19.189 00:19:19.189 [job0] 00:19:19.189 filename=/dev/nvme0n1 00:19:19.189 [job1] 00:19:19.189 filename=/dev/nvme10n1 00:19:19.189 [job2] 00:19:19.189 filename=/dev/nvme1n1 00:19:19.189 [job3] 00:19:19.189 filename=/dev/nvme2n1 00:19:19.189 [job4] 00:19:19.189 filename=/dev/nvme3n1 00:19:19.189 [job5] 00:19:19.189 filename=/dev/nvme4n1 00:19:19.189 [job6] 00:19:19.189 filename=/dev/nvme5n1 00:19:19.189 [job7] 00:19:19.189 filename=/dev/nvme6n1 00:19:19.189 [job8] 00:19:19.189 filename=/dev/nvme7n1 00:19:19.189 [job9] 00:19:19.189 filename=/dev/nvme8n1 00:19:19.189 [job10] 00:19:19.189 
filename=/dev/nvme9n1 00:19:19.189 Could not set queue depth (nvme0n1) 00:19:19.189 Could not set queue depth (nvme10n1) 00:19:19.189 Could not set queue depth (nvme1n1) 00:19:19.189 Could not set queue depth (nvme2n1) 00:19:19.189 Could not set queue depth (nvme3n1) 00:19:19.189 Could not set queue depth (nvme4n1) 00:19:19.189 Could not set queue depth (nvme5n1) 00:19:19.189 Could not set queue depth (nvme6n1) 00:19:19.189 Could not set queue depth (nvme7n1) 00:19:19.189 Could not set queue depth (nvme8n1) 00:19:19.189 Could not set queue depth (nvme9n1) 00:19:19.189 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.189 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.189 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.189 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.189 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.189 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.190 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.190 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.190 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.190 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.190 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.190 fio-3.35 00:19:19.190 Starting 11 threads 00:19:29.171 00:19:29.171 job0: (groupid=0, jobs=1): err= 0: pid=77902: Sat Sep 28 01:32:24 2024 00:19:29.171 write: IOPS=492, BW=123MiB/s (129MB/s)(1240MiB/10070msec); 0 zone resets 00:19:29.171 slat (usec): min=19, max=100625, avg=1921.74, stdev=5372.29 00:19:29.171 clat (msec): min=2, max=506, avg=127.95, stdev=120.37 00:19:29.172 lat (msec): min=2, max=506, avg=129.87, stdev=122.16 00:19:29.172 clat percentiles (msec): 00:19:29.172 | 1.00th=[ 10], 5.00th=[ 46], 10.00th=[ 84], 20.00th=[ 86], 00:19:29.172 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:19:29.172 | 70.00th=[ 92], 80.00th=[ 93], 90.00th=[ 363], 95.00th=[ 481], 00:19:29.172 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 506], 99.95th=[ 506], 00:19:29.172 | 99.99th=[ 506] 00:19:29.172 bw ( KiB/s): min=32768, max=225280, per=16.33%, avg=125388.80, stdev=74787.04, samples=20 00:19:29.172 iops : min= 128, max= 880, avg=489.80, stdev=292.14, samples=20 00:19:29.172 lat (msec) : 4=0.08%, 10=0.95%, 20=1.25%, 50=3.18%, 100=81.68% 00:19:29.172 lat (msec) : 250=1.33%, 500=11.05%, 750=0.48% 00:19:29.172 cpu : usr=0.80%, sys=1.20%, ctx=2329, majf=0, minf=1 00:19:29.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.172 issued rwts: total=0,4961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.172 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:19:29.172 job1: (groupid=0, jobs=1): err= 0: pid=77903: Sat Sep 28 01:32:24 2024 00:19:29.172 write: IOPS=209, BW=52.4MiB/s (54.9MB/s)(534MiB/10204msec); 0 zone resets 00:19:29.172 slat (usec): min=17, max=193635, avg=4677.67, stdev=9058.90 00:19:29.172 clat (msec): min=129, max=508, avg=300.77, stdev=25.81 00:19:29.172 lat (msec): min=141, max=508, avg=305.45, stdev=24.90 00:19:29.172 clat percentiles (msec): 00:19:29.172 | 1.00th=[ 222], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 292], 00:19:29.172 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 309], 00:19:29.172 | 70.00th=[ 313], 80.00th=[ 317], 90.00th=[ 317], 95.00th=[ 321], 00:19:29.172 | 99.00th=[ 414], 99.50th=[ 451], 99.90th=[ 489], 99.95th=[ 510], 00:19:29.172 | 99.99th=[ 510] 00:19:29.172 bw ( KiB/s): min=43094, max=57344, per=6.92%, avg=53098.70, stdev=3190.11, samples=20 00:19:29.172 iops : min= 168, max= 224, avg=207.40, stdev=12.52, samples=20 00:19:29.172 lat (msec) : 250=2.25%, 500=97.66%, 750=0.09% 00:19:29.172 cpu : usr=0.40%, sys=0.62%, ctx=1718, majf=0, minf=1 00:19:29.172 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.172 issued rwts: total=0,2137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.172 job2: (groupid=0, jobs=1): err= 0: pid=77914: Sat Sep 28 01:32:24 2024 00:19:29.172 write: IOPS=160, BW=40.0MiB/s (41.9MB/s)(411MiB/10261msec); 0 zone resets 00:19:29.172 slat (usec): min=17, max=87153, avg=6089.12, stdev=11207.77 00:19:29.172 clat (msec): min=73, max=614, avg=393.62, stdev=62.13 00:19:29.172 lat (msec): min=73, max=614, avg=399.71, stdev=62.22 00:19:29.172 clat percentiles (msec): 00:19:29.172 | 1.00th=[ 167], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 355], 00:19:29.172 | 30.00th=[ 363], 40.00th=[ 368], 50.00th=[ 372], 60.00th=[ 376], 00:19:29.172 | 70.00th=[ 439], 80.00th=[ 460], 90.00th=[ 477], 95.00th=[ 485], 00:19:29.172 | 99.00th=[ 523], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 617], 00:19:29.172 | 99.99th=[ 617] 00:19:29.172 bw ( KiB/s): min=32768, max=45568, per=5.27%, avg=40422.40, stdev=4855.77, samples=20 00:19:29.172 iops : min= 128, max= 178, avg=157.90, stdev=18.97, samples=20 00:19:29.172 lat (msec) : 100=0.30%, 250=1.46%, 500=96.89%, 750=1.34% 00:19:29.172 cpu : usr=0.35%, sys=0.42%, ctx=617, majf=0, minf=1 00:19:29.172 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:19:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.172 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.172 issued rwts: total=0,1642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.172 job3: (groupid=0, jobs=1): err= 0: pid=77916: Sat Sep 28 01:32:24 2024 00:19:29.172 write: IOPS=155, BW=39.0MiB/s (40.9MB/s)(400MiB/10260msec); 0 zone resets 00:19:29.172 slat (usec): min=15, max=241480, avg=6252.42, stdev=12815.64 00:19:29.172 clat (msec): min=243, max=608, avg=403.93, stdev=62.10 00:19:29.172 lat (msec): min=243, max=608, avg=410.18, stdev=61.90 00:19:29.172 clat percentiles (msec): 00:19:29.172 | 1.00th=[ 279], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:19:29.172 | 30.00th=[ 363], 40.00th=[ 368], 50.00th=[ 372], 60.00th=[ 380], 00:19:29.172 | 
70.00th=[ 456], 80.00th=[ 481], 90.00th=[ 493], 95.00th=[ 502], 00:19:29.172 | 99.00th=[ 550], 99.50th=[ 575], 99.90th=[ 609], 99.95th=[ 609], 00:19:29.172 | 99.99th=[ 609] 00:19:29.172 bw ( KiB/s): min=24576, max=45568, per=5.12%, avg=39333.75, stdev=6337.55, samples=20 00:19:29.172 iops : min= 96, max= 178, avg=153.60, stdev=24.71, samples=20 00:19:29.172 lat (msec) : 250=0.31%, 500=94.62%, 750=5.06% 00:19:29.172 cpu : usr=0.29%, sys=0.44%, ctx=1723, majf=0, minf=1 00:19:29.172 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:19:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.172 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.172 issued rwts: total=0,1600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.172 job4: (groupid=0, jobs=1): err= 0: pid=77917: Sat Sep 28 01:32:24 2024 00:19:29.172 write: IOPS=159, BW=39.8MiB/s (41.8MB/s)(409MiB/10261msec); 0 zone resets 00:19:29.172 slat (usec): min=16, max=102821, avg=6069.92, stdev=11236.06 00:19:29.172 clat (msec): min=44, max=613, avg=395.31, stdev=71.27 00:19:29.172 lat (msec): min=44, max=613, avg=401.38, stdev=71.67 00:19:29.172 clat percentiles (msec): 00:19:29.172 | 1.00th=[ 131], 5.00th=[ 334], 10.00th=[ 342], 20.00th=[ 355], 00:19:29.172 | 30.00th=[ 363], 40.00th=[ 368], 50.00th=[ 372], 60.00th=[ 380], 00:19:29.172 | 70.00th=[ 439], 80.00th=[ 477], 90.00th=[ 489], 95.00th=[ 498], 00:19:29.172 | 99.00th=[ 523], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 617], 00:19:29.172 | 99.99th=[ 617] 00:19:29.172 bw ( KiB/s): min=32768, max=45568, per=5.24%, avg=40247.00, stdev=5062.76, samples=20 00:19:29.172 iops : min= 128, max= 178, avg=157.20, stdev=19.78, samples=20 00:19:29.172 lat (msec) : 50=0.24%, 100=0.49%, 250=1.77%, 500=94.56%, 750=2.94% 00:19:29.172 cpu : usr=0.25%, sys=0.58%, ctx=1875, majf=0, minf=1 00:19:29.172 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:19:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.172 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.172 issued rwts: total=0,1635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.172 job5: (groupid=0, jobs=1): err= 0: pid=77918: Sat Sep 28 01:32:24 2024 00:19:29.172 write: IOPS=159, BW=39.8MiB/s (41.8MB/s)(409MiB/10266msec); 0 zone resets 00:19:29.172 slat (usec): min=20, max=175086, avg=6118.07, stdev=11796.17 00:19:29.172 clat (msec): min=177, max=608, avg=395.26, stdev=55.17 00:19:29.172 lat (msec): min=177, max=608, avg=401.38, stdev=54.91 00:19:29.172 clat percentiles (msec): 00:19:29.172 | 1.00th=[ 247], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:19:29.172 | 30.00th=[ 363], 40.00th=[ 368], 50.00th=[ 372], 60.00th=[ 380], 00:19:29.172 | 70.00th=[ 435], 80.00th=[ 456], 90.00th=[ 472], 95.00th=[ 485], 00:19:29.172 | 99.00th=[ 518], 99.50th=[ 567], 99.90th=[ 609], 99.95th=[ 609], 00:19:29.172 | 99.99th=[ 609] 00:19:29.172 bw ( KiB/s): min=30781, max=47104, per=5.24%, avg=40246.25, stdev=5279.71, samples=20 00:19:29.172 iops : min= 120, max= 184, avg=157.20, stdev=20.65, samples=20 00:19:29.172 lat (msec) : 250=1.04%, 500=97.68%, 750=1.28% 00:19:29.172 cpu : usr=0.31%, sys=0.49%, ctx=1996, majf=0, minf=1 00:19:29.172 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:19:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.172 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.172 issued rwts: total=0,1636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.172 job6: (groupid=0, jobs=1): err= 0: pid=77919: Sat Sep 28 01:32:24 2024 00:19:29.172 write: IOPS=206, BW=51.6MiB/s (54.2MB/s)(527MiB/10204msec); 0 zone resets 00:19:29.172 slat (usec): min=17, max=284109, avg=4746.04, stdev=10144.72 00:19:29.172 clat (msec): min=77, max=533, avg=304.92, stdev=31.63 00:19:29.172 lat (msec): min=77, max=533, avg=309.66, stdev=29.90 00:19:29.172 clat percentiles (msec): 00:19:29.172 | 1.00th=[ 257], 5.00th=[ 275], 10.00th=[ 279], 20.00th=[ 292], 00:19:29.173 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 309], 00:19:29.173 | 70.00th=[ 313], 80.00th=[ 317], 90.00th=[ 317], 95.00th=[ 321], 00:19:29.173 | 99.00th=[ 468], 99.50th=[ 502], 99.90th=[ 514], 99.95th=[ 535], 00:19:29.173 | 99.99th=[ 535] 00:19:29.173 bw ( KiB/s): min=28729, max=57344, per=6.82%, avg=52329.25, stdev=5911.59, samples=20 00:19:29.173 iops : min= 112, max= 224, avg=204.40, stdev=23.14, samples=20 00:19:29.173 lat (msec) : 100=0.14%, 250=0.76%, 500=98.72%, 750=0.38% 00:19:29.173 cpu : usr=0.40%, sys=0.59%, ctx=1830, majf=0, minf=1 00:19:29.173 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:19:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.173 issued rwts: total=0,2108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.173 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.173 job7: (groupid=0, jobs=1): err= 0: pid=77920: Sat Sep 28 01:32:24 2024 00:19:29.173 write: IOPS=905, BW=226MiB/s (237MB/s)(2279MiB/10064msec); 0 zone resets 00:19:29.173 slat (usec): min=16, max=8249, avg=1092.22, stdev=1842.06 00:19:29.173 clat (msec): min=10, max=130, avg=69.55, stdev= 4.47 00:19:29.173 lat (msec): min=10, max=130, avg=70.64, stdev= 4.19 00:19:29.173 clat percentiles (msec): 00:19:29.173 | 1.00th=[ 65], 5.00th=[ 66], 10.00th=[ 67], 20.00th=[ 67], 00:19:29.173 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 70], 60.00th=[ 71], 00:19:29.173 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 72], 95.00th=[ 73], 00:19:29.173 | 99.00th=[ 74], 99.50th=[ 83], 99.90th=[ 122], 99.95th=[ 126], 00:19:29.173 | 99.99th=[ 131] 00:19:29.173 bw ( KiB/s): min=224768, max=236032, per=30.19%, avg=231731.20, stdev=2731.56, samples=20 00:19:29.173 iops : min= 878, max= 922, avg=905.20, stdev=10.67, samples=20 00:19:29.173 lat (msec) : 20=0.13%, 50=0.31%, 100=99.28%, 250=0.29% 00:19:29.173 cpu : usr=1.35%, sys=2.07%, ctx=9174, majf=0, minf=2 00:19:29.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.173 issued rwts: total=0,9115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.173 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.173 job8: (groupid=0, jobs=1): err= 0: pid=77921: Sat Sep 28 01:32:24 2024 00:19:29.173 write: IOPS=210, BW=52.6MiB/s (55.2MB/s)(537MiB/10202msec); 0 zone resets 00:19:29.173 slat (usec): min=17, max=85083, avg=4654.48, stdev=8368.87 00:19:29.173 clat (msec): min=87, max=508, avg=299.19, stdev=31.78 00:19:29.173 lat (msec): min=87, max=508, 
avg=303.84, stdev=31.32 00:19:29.173 clat percentiles (msec): 00:19:29.173 | 1.00th=[ 155], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 292], 00:19:29.173 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 309], 00:19:29.173 | 70.00th=[ 313], 80.00th=[ 317], 90.00th=[ 317], 95.00th=[ 321], 00:19:29.173 | 99.00th=[ 414], 99.50th=[ 451], 99.90th=[ 489], 99.95th=[ 510], 00:19:29.173 | 99.99th=[ 510] 00:19:29.173 bw ( KiB/s): min=49152, max=57344, per=6.95%, avg=53376.00, stdev=2417.95, samples=20 00:19:29.173 iops : min= 192, max= 224, avg=208.50, stdev= 9.45, samples=20 00:19:29.173 lat (msec) : 100=0.19%, 250=2.75%, 500=96.97%, 750=0.09% 00:19:29.173 cpu : usr=0.37%, sys=0.47%, ctx=2549, majf=0, minf=1 00:19:29.173 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.173 issued rwts: total=0,2148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.173 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.173 job9: (groupid=0, jobs=1): err= 0: pid=77922: Sat Sep 28 01:32:24 2024 00:19:29.173 write: IOPS=210, BW=52.6MiB/s (55.2MB/s)(537MiB/10204msec); 0 zone resets 00:19:29.173 slat (usec): min=17, max=153650, avg=4652.63, stdev=8709.78 00:19:29.173 clat (msec): min=23, max=511, avg=299.24, stdev=44.21 00:19:29.173 lat (msec): min=23, max=511, avg=303.89, stdev=44.12 00:19:29.173 clat percentiles (msec): 00:19:29.173 | 1.00th=[ 65], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 292], 00:19:29.173 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 309], 00:19:29.173 | 70.00th=[ 313], 80.00th=[ 317], 90.00th=[ 317], 95.00th=[ 321], 00:19:29.173 | 99.00th=[ 414], 99.50th=[ 456], 99.90th=[ 493], 99.95th=[ 510], 00:19:29.173 | 99.99th=[ 510] 00:19:29.173 bw ( KiB/s): min=51200, max=57344, per=6.95%, avg=53381.10, stdev=2121.12, samples=20 00:19:29.173 iops : min= 200, max= 224, avg=208.50, stdev= 8.31, samples=20 00:19:29.173 lat (msec) : 50=0.74%, 100=0.93%, 250=1.54%, 500=96.69%, 750=0.09% 00:19:29.173 cpu : usr=0.47%, sys=0.53%, ctx=1793, majf=0, minf=1 00:19:29.173 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.173 issued rwts: total=0,2148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.173 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.173 job10: (groupid=0, jobs=1): err= 0: pid=77923: Sat Sep 28 01:32:24 2024 00:19:29.173 write: IOPS=161, BW=40.3MiB/s (42.2MB/s)(413MiB/10265msec); 0 zone resets 00:19:29.173 slat (usec): min=15, max=88501, avg=5918.30, stdev=10913.20 00:19:29.173 clat (msec): min=36, max=608, avg=391.28, stdev=67.56 00:19:29.173 lat (msec): min=36, max=608, avg=397.20, stdev=67.80 00:19:29.173 clat percentiles (msec): 00:19:29.173 | 1.00th=[ 125], 5.00th=[ 334], 10.00th=[ 342], 20.00th=[ 355], 00:19:29.173 | 30.00th=[ 363], 40.00th=[ 368], 50.00th=[ 372], 60.00th=[ 376], 00:19:29.173 | 70.00th=[ 439], 80.00th=[ 464], 90.00th=[ 477], 95.00th=[ 481], 00:19:29.173 | 99.00th=[ 518], 99.50th=[ 567], 99.90th=[ 609], 99.95th=[ 609], 00:19:29.173 | 99.99th=[ 609] 00:19:29.173 bw ( KiB/s): min=32768, max=47104, per=5.30%, avg=40678.40, stdev=4763.98, samples=20 00:19:29.173 iops : min= 128, max= 184, avg=158.90, stdev=18.61, samples=20 00:19:29.173 lat 
(msec) : 50=0.24%, 100=0.73%, 250=1.51%, 500=96.43%, 750=1.09% 00:19:29.173 cpu : usr=0.31%, sys=0.48%, ctx=848, majf=0, minf=1 00:19:29.173 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:19:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.173 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.173 issued rwts: total=0,1653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.173 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.173 00:19:29.173 Run status group 0 (all jobs): 00:19:29.173 WRITE: bw=750MiB/s (786MB/s), 39.0MiB/s-226MiB/s (40.9MB/s-237MB/s), io=7696MiB (8070MB), run=10064-10266msec 00:19:29.173 00:19:29.173 Disk stats (read/write): 00:19:29.173 nvme0n1: ios=50/9795, merge=0/0, ticks=41/1218364, in_queue=1218405, util=97.85% 00:19:29.173 nvme10n1: ios=49/4142, merge=0/0, ticks=71/1204041, in_queue=1204112, util=98.06% 00:19:29.173 nvme1n1: ios=45/3271, merge=0/0, ticks=52/1237638, in_queue=1237690, util=98.16% 00:19:29.173 nvme2n1: ios=36/3182, merge=0/0, ticks=44/1236811, in_queue=1236855, util=98.18% 00:19:29.173 nvme3n1: ios=31/3261, merge=0/0, ticks=62/1238659, in_queue=1238721, util=98.38% 00:19:29.173 nvme4n1: ios=0/3254, merge=0/0, ticks=0/1237600, in_queue=1237600, util=98.21% 00:19:29.173 nvme5n1: ios=0/4080, merge=0/0, ticks=0/1204183, in_queue=1204183, util=98.31% 00:19:29.173 nvme6n1: ios=0/18077, merge=0/0, ticks=0/1216187, in_queue=1216187, util=98.44% 00:19:29.173 nvme7n1: ios=0/4164, merge=0/0, ticks=0/1203327, in_queue=1203327, util=98.68% 00:19:29.173 nvme8n1: ios=0/4166, merge=0/0, ticks=0/1203058, in_queue=1203058, util=98.81% 00:19:29.173 nvme9n1: ios=0/3291, merge=0/0, ticks=0/1238235, in_queue=1238235, util=98.91% 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.173 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:29.174 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:29.174 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:29.174 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:29.174 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:29.174 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:29.174 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.174 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:29.174 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:29.174 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:29.174 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.174 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.174 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:19:29.174 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.175 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:19:29.175 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.175 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:29.175 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.175 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.175 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.175 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.175 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:29.434 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:29.434 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:29.434 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:29.434 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:29.435 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:29.435 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.435 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.435 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:29.435 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:29.435 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:29.435 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:29.435 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.694 rmmod nvme_tcp 00:19:29.694 rmmod nvme_fabrics 00:19:29.694 rmmod nvme_keyring 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 77236 ']' 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 77236 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 77236 ']' 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 77236 00:19:29.694 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:19:29.695 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.695 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77236 00:19:29.695 killing process with pid 77236 00:19:29.695 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:29.695 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:29.695 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77236' 00:19:29.695 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 77236 00:19:29.695 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 77236 00:19:32.228 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:32.228 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:32.229 01:32:27 
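The loop traced above (target/multiconnection.sh lines 37-40) repeats the same three moves for each of the eleven subsystems. A minimal bash sketch of that flow; the waitforserial_disconnect body and the rpc.py path are assumptions inferred from the xtrace, not copied from the helpers themselves:

# per-subsystem teardown, sketched from multiconnection.sh lines 37-40
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # 1. drop the host-side fabric connection
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # 2. wait until no block device with serial SPDK<i> is visible any more
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    # 3. delete the subsystem on the target over the RPC socket
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done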
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:32.229 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:19:32.229 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:19:32.229 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:32.229 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:32.229 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:19:32.487 00:19:32.487 real 0m52.510s 00:19:32.487 user 3m0.840s 00:19:32.487 sys 0m24.365s 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:32.487 ************************************ 00:19:32.487 END TEST 
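Once the last subsystem is gone, nvmftestfini unwinds everything the earlier setup created. Condensed from the commands visible above; the namespace removal at the end runs with xtrace disabled, so that line is an assumption:

# host-side module unload and target shutdown
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"        # 77236 in this run
# remove only the firewall rules the test tagged with an SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
# dismantle the veth/bridge topology
for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" nomaster
    ip link set "$link" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk          # assumed; remove_spdk_ns runs with xtrace off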
nvmf_multiconnection 00:19:32.487 ************************************ 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.487 ************************************ 00:19:32.487 START TEST nvmf_initiator_timeout 00:19:32.487 ************************************ 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:32.487 * Looking for test storage... 00:19:32.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:19:32.487 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:32.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.747 --rc genhtml_branch_coverage=1 00:19:32.747 --rc genhtml_function_coverage=1 00:19:32.747 --rc genhtml_legend=1 00:19:32.747 --rc geninfo_all_blocks=1 00:19:32.747 --rc geninfo_unexecuted_blocks=1 00:19:32.747 00:19:32.747 ' 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:32.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.747 --rc genhtml_branch_coverage=1 00:19:32.747 --rc genhtml_function_coverage=1 00:19:32.747 --rc genhtml_legend=1 00:19:32.747 --rc geninfo_all_blocks=1 00:19:32.747 --rc geninfo_unexecuted_blocks=1 00:19:32.747 00:19:32.747 ' 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:32.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.747 --rc genhtml_branch_coverage=1 00:19:32.747 --rc genhtml_function_coverage=1 00:19:32.747 --rc genhtml_legend=1 00:19:32.747 --rc geninfo_all_blocks=1 00:19:32.747 --rc geninfo_unexecuted_blocks=1 00:19:32.747 00:19:32.747 ' 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:32.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.747 --rc genhtml_branch_coverage=1 00:19:32.747 --rc genhtml_function_coverage=1 00:19:32.747 --rc genhtml_legend=1 00:19:32.747 --rc geninfo_all_blocks=1 00:19:32.747 --rc geninfo_unexecuted_blocks=1 00:19:32.747 00:19:32.747 ' 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
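The block above is scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2.x before picking the coverage options to export. A stand-alone simplification of that field-wise comparison, assuming purely numeric version fields:

# returns 0 (true) when version $1 sorts before version $2
lt() {
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "old lcov: keep the --rc lcov_branch_coverage=1 style options"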
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
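For the later nvme connect call, common.sh derives a host identity once: gen-hostnqn prints an nqn.2014-08.org.nvmexpress:uuid:<uuid> string and the host ID reuses the UUID part. The extraction below is a guess at the mechanism; only the two resulting values appear in the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # strip everything up to the UUID (assumed)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")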
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.747 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.748 01:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:32.748 Cannot find device "nvmf_init_br" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:32.748 Cannot find device "nvmf_init_br2" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:32.748 Cannot find device "nvmf_tgt_br" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.748 Cannot find device "nvmf_tgt_br2" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:32.748 Cannot find device "nvmf_init_br" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:32.748 Cannot find device "nvmf_init_br2" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:32.748 Cannot find device "nvmf_tgt_br" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:32.748 Cannot find device "nvmf_tgt_br2" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:19:32.748 01:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:32.748 Cannot find device "nvmf_br" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:32.748 Cannot find device "nvmf_init_if" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:32.748 Cannot find device "nvmf_init_if2" 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:32.748 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
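The "Cannot find device" and "Cannot open network namespace" lines above are just the idempotent cleanup probing for leftovers from a previous run; with nothing to remove, nvmf_veth_init builds the test topology from scratch. Reduced to the commands actually traced:

# one namespace for the target, two veth pairs per side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# target ends live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator side gets 10.0.0.1/.2, target side 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up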
nvmf_init_br up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:33.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:19:33.007 00:19:33.007 --- 10.0.0.3 ping statistics --- 00:19:33.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.007 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:33.007 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
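The peer ends are then joined through one Linux bridge and TCP port 4420 is opened explicitly; the SPDK_NVMF comment on each rule is what the teardown's iptables-save | grep -v filter keys on. Condensed from the trace, followed by the reachability pings it performs:

ip link add nvmf_br type bridge
ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br
done
# allow NVMe/TCP traffic in, and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                    # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator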
00:19:33.007 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:19:33.007 00:19:33.007 --- 10.0.0.4 ping statistics --- 00:19:33.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.007 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:33.007 00:19:33.007 --- 10.0.0.1 ping statistics --- 00:19:33.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.007 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:33.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:19:33.007 00:19:33.007 --- 10.0.0.2 ping statistics --- 00:19:33.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.007 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=78366 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 78366 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 78366 ']' 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.007 01:32:28 
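With the topology verified by the four pings, the target application is started inside the namespace so that it binds the 10.0.0.3/10.0.0.4 side, and the harness blocks until the RPC socket answers. The launch line is verbatim from the trace; the polling loop is only a sketch of what waitforlisten does:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!    # 78366 in this run
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo 'nvmf_tgt exited during startup' >&2; break; }
    sleep 0.5
done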
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.007 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:33.265 [2024-09-28 01:32:29.038994] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:33.265 [2024-09-28 01:32:29.039185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.524 [2024-09-28 01:32:29.217611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.524 [2024-09-28 01:32:29.449377] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.524 [2024-09-28 01:32:29.449501] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.524 [2024-09-28 01:32:29.449531] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.524 [2024-09-28 01:32:29.449548] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.524 [2024-09-28 01:32:29.449564] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:33.524 [2024-09-28 01:32:29.449766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.524 [2024-09-28 01:32:29.450397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.524 [2024-09-28 01:32:29.450608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.524 [2024-09-28 01:32:29.450628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.782 [2024-09-28 01:32:29.632827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.040 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.300 Malloc0 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.300 Delay0 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.300 [2024-09-28 01:32:30.053219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:34.300 01:32:30 
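Once the reactors are up (one per core in the 0xF mask, with the uring socket implementation selected), the storage stack for the timeout test is provisioned over RPC. rpc_cmd in the trace is assumed to wrap SPDK's scripts/rpc.py; the arguments are exactly those shown above:

# 64 MiB malloc bdev with 512-byte blocks
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# wrap it in a delay bdev; all four latencies (avg/p99, read/write) start at 30 us
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
# create the TCP transport with the option flags seen in the trace
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192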
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.300 [2024-09-28 01:32:30.085393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.300 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:34.558 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:34.558 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:19:34.558 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:34.558 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:34.558 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=78426 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
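The subsystem is then wired up and the initiator connects from the root namespace to the listener inside the target namespace. The RPCs and the nvme-cli call mirror the trace; the final loop is a sketch of waitforserial, which greps lsblk for the SPDKISFASTANDAWESOME serial before fio is allowed to start:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
# wait until a namespace with the expected serial shows up
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 1 )); do
    sleep 2
done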
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:36.458 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:36.458 [global] 00:19:36.458 thread=1 00:19:36.458 invalidate=1 00:19:36.458 rw=write 00:19:36.458 time_based=1 00:19:36.458 runtime=60 00:19:36.458 ioengine=libaio 00:19:36.458 direct=1 00:19:36.458 bs=4096 00:19:36.458 iodepth=1 00:19:36.458 norandommap=0 00:19:36.458 numjobs=1 00:19:36.458 00:19:36.458 verify_dump=1 00:19:36.458 verify_backlog=512 00:19:36.458 verify_state_save=0 00:19:36.458 do_verify=1 00:19:36.458 verify=crc32c-intel 00:19:36.458 [job0] 00:19:36.458 filename=/dev/nvme0n1 00:19:36.458 Could not set queue depth (nvme0n1) 00:19:36.718 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:36.718 fio-3.35 00:19:36.718 Starting 1 thread 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.059 true 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.059 true 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.059 true 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.059 true 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.059 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.593 true 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.593 true 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.593 true 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.593 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.593 true 00:19:42.594 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.594 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:42.594 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 78426 00:20:38.822 00:20:38.822 job0: (groupid=0, jobs=1): err= 0: pid=78447: Sat Sep 28 01:33:32 2024 00:20:38.822 read: IOPS=750, BW=3004KiB/s (3076kB/s)(176MiB/60000msec) 00:20:38.822 slat (usec): min=11, max=471, avg=14.17, stdev= 4.46 00:20:38.822 clat (usec): min=2, max=1805, avg=223.65, stdev=24.21 00:20:38.822 lat (usec): min=196, max=1818, avg=237.83, stdev=25.08 00:20:38.822 clat percentiles (usec): 00:20:38.822 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:20:38.822 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:20:38.822 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:20:38.822 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 371], 99.95th=[ 469], 00:20:38.822 | 99.99th=[ 766] 00:20:38.822 write: IOPS=753, BW=3012KiB/s (3084kB/s)(176MiB/60000msec); 0 zone resets 00:20:38.822 slat (usec): min=13, max=15111, avg=21.06, stdev=90.13 00:20:38.822 clat (usec): min=135, max=40371k, avg=1066.23, stdev=189930.52 00:20:38.822 lat (usec): min=153, max=40371k, avg=1087.29, stdev=189930.52 00:20:38.822 clat percentiles (usec): 00:20:38.822 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 155], 00:20:38.822 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:20:38.822 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 212], 00:20:38.822 | 99.00th=[ 239], 
99.50th=[ 249], 99.90th=[ 343], 99.95th=[ 482], 00:20:38.822 | 99.99th=[ 1500] 00:20:38.822 bw ( KiB/s): min= 5544, max=10896, per=100.00%, avg=9059.90, stdev=1197.83, samples=39 00:20:38.822 iops : min= 1386, max= 2724, avg=2264.97, stdev=299.46, samples=39 00:20:38.822 lat (usec) : 4=0.01%, 250=94.45%, 500=5.50%, 750=0.03%, 1000=0.01% 00:20:38.822 lat (msec) : 2=0.01%, >=2000=0.01% 00:20:38.822 cpu : usr=0.61%, sys=2.06%, ctx=90246, majf=0, minf=5 00:20:38.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:38.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.822 issued rwts: total=45056,45181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:38.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:38.822 00:20:38.822 Run status group 0 (all jobs): 00:20:38.822 READ: bw=3004KiB/s (3076kB/s), 3004KiB/s-3004KiB/s (3076kB/s-3076kB/s), io=176MiB (185MB), run=60000-60000msec 00:20:38.822 WRITE: bw=3012KiB/s (3084kB/s), 3012KiB/s-3012KiB/s (3084kB/s-3084kB/s), io=176MiB (185MB), run=60000-60000msec 00:20:38.822 00:20:38.822 Disk stats (read/write): 00:20:38.822 nvme0n1: ios=44917/45056, merge=0/0, ticks=10284/8151, in_queue=18435, util=99.54% 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:38.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:38.822 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:38.822 nvmf hotplug test: fio successful as expected 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:38.823 01:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.823 rmmod nvme_tcp 00:20:38.823 rmmod nvme_fabrics 00:20:38.823 rmmod nvme_keyring 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 78366 ']' 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 78366 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 78366 ']' 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 78366 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78366 00:20:38.823 killing process with pid 78366 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78366' 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 78366 00:20:38.823 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 78366 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:38.823 01:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:38.823 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:20:38.823 00:20:38.823 real 1m5.807s 00:20:38.823 user 3m55.353s 00:20:38.823 sys 0m21.532s 00:20:38.823 ************************************ 00:20:38.823 END TEST nvmf_initiator_timeout 00:20:38.823 ************************************ 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:20:38.823 ************************************ 00:20:38.823 END TEST nvmf_target_extra 00:20:38.823 ************************************ 00:20:38.823 00:20:38.823 real 7m42.926s 00:20:38.823 user 18m45.954s 00:20:38.823 sys 1m51.652s 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:38.823 01:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.823 01:33:34 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:38.823 01:33:34 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:38.824 01:33:34 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:38.824 01:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:38.824 ************************************ 00:20:38.824 START TEST nvmf_host 00:20:38.824 ************************************ 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:38.824 * Looking for test storage... 00:20:38.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:38.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.824 --rc genhtml_branch_coverage=1 00:20:38.824 --rc genhtml_function_coverage=1 00:20:38.824 --rc genhtml_legend=1 00:20:38.824 --rc geninfo_all_blocks=1 00:20:38.824 --rc geninfo_unexecuted_blocks=1 00:20:38.824 00:20:38.824 ' 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:38.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.824 --rc genhtml_branch_coverage=1 00:20:38.824 --rc genhtml_function_coverage=1 00:20:38.824 --rc genhtml_legend=1 00:20:38.824 --rc geninfo_all_blocks=1 00:20:38.824 --rc geninfo_unexecuted_blocks=1 00:20:38.824 00:20:38.824 ' 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:38.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.824 --rc genhtml_branch_coverage=1 00:20:38.824 --rc genhtml_function_coverage=1 00:20:38.824 --rc genhtml_legend=1 00:20:38.824 --rc geninfo_all_blocks=1 00:20:38.824 --rc geninfo_unexecuted_blocks=1 00:20:38.824 00:20:38.824 ' 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:38.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.824 --rc genhtml_branch_coverage=1 00:20:38.824 --rc genhtml_function_coverage=1 00:20:38.824 --rc genhtml_legend=1 00:20:38.824 --rc geninfo_all_blocks=1 00:20:38.824 --rc geninfo_unexecuted_blocks=1 00:20:38.824 00:20:38.824 ' 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.824 01:33:34 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.824 01:33:34 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.825 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.825 ************************************ 00:20:38.825 START TEST nvmf_identify 00:20:38.825 ************************************ 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:38.825 * Looking for test storage... 
00:20:38.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.825 --rc genhtml_branch_coverage=1 00:20:38.825 --rc genhtml_function_coverage=1 00:20:38.825 --rc genhtml_legend=1 00:20:38.825 --rc geninfo_all_blocks=1 00:20:38.825 --rc geninfo_unexecuted_blocks=1 00:20:38.825 00:20:38.825 ' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.825 --rc genhtml_branch_coverage=1 00:20:38.825 --rc genhtml_function_coverage=1 00:20:38.825 --rc genhtml_legend=1 00:20:38.825 --rc geninfo_all_blocks=1 00:20:38.825 --rc geninfo_unexecuted_blocks=1 00:20:38.825 00:20:38.825 ' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.825 --rc genhtml_branch_coverage=1 00:20:38.825 --rc genhtml_function_coverage=1 00:20:38.825 --rc genhtml_legend=1 00:20:38.825 --rc geninfo_all_blocks=1 00:20:38.825 --rc geninfo_unexecuted_blocks=1 00:20:38.825 00:20:38.825 ' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.825 --rc genhtml_branch_coverage=1 00:20:38.825 --rc genhtml_function_coverage=1 00:20:38.825 --rc genhtml_legend=1 00:20:38.825 --rc geninfo_all_blocks=1 00:20:38.825 --rc geninfo_unexecuted_blocks=1 00:20:38.825 00:20:38.825 ' 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.825 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.826 
01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.826 01:33:34 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:38.826 Cannot find device "nvmf_init_br" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:38.826 Cannot find device "nvmf_init_br2" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:38.826 Cannot find device "nvmf_tgt_br" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:38.826 Cannot find device "nvmf_tgt_br2" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:38.826 Cannot find device "nvmf_init_br" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:38.826 Cannot find device "nvmf_init_br2" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:38.826 Cannot find device "nvmf_tgt_br" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:38.826 Cannot find device "nvmf_tgt_br2" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:38.826 Cannot find device "nvmf_br" 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:38.826 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:39.086 Cannot find device "nvmf_init_if" 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:39.086 Cannot find device "nvmf_init_if2" 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.086 
01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.086 01:33:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:39.086 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:39.086 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:39.086 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.086 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:39.086 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:39.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:39.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:20:39.086 00:20:39.086 --- 10.0.0.3 ping statistics --- 00:20:39.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.086 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:20:39.086 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:39.346 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:39.346 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:20:39.346 00:20:39.346 --- 10.0.0.4 ping statistics --- 00:20:39.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.346 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:20:39.346 00:20:39.346 --- 10.0.0.1 ping statistics --- 00:20:39.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.346 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:39.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:39.346 00:20:39.346 --- 10.0.0.2 ping statistics --- 00:20:39.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.346 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79374 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79374 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 79374 ']' 00:20:39.346 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.346 01:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.346 [2024-09-28 01:33:35.185543] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:39.346 [2024-09-28 01:33:35.185987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.605 [2024-09-28 01:33:35.362728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.863 [2024-09-28 01:33:35.594569] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.863 [2024-09-28 01:33:35.594638] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.864 [2024-09-28 01:33:35.594673] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.864 [2024-09-28 01:33:35.594688] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.864 [2024-09-28 01:33:35.594705] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:39.864 [2024-09-28 01:33:35.594911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.864 [2024-09-28 01:33:35.595620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.864 [2024-09-28 01:33:35.595751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.864 [2024-09-28 01:33:35.595763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.864 [2024-09-28 01:33:35.760261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.434 [2024-09-28 01:33:36.097271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.434 Malloc0 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.434 [2024-09-28 01:33:36.230898] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.434 [ 00:20:40.434 { 00:20:40.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:40.434 "subtype": "Discovery", 00:20:40.434 "listen_addresses": [ 00:20:40.434 { 00:20:40.434 "trtype": "TCP", 00:20:40.434 "adrfam": "IPv4", 00:20:40.434 "traddr": "10.0.0.3", 00:20:40.434 "trsvcid": "4420" 00:20:40.434 } 00:20:40.434 ], 00:20:40.434 "allow_any_host": true, 00:20:40.434 "hosts": [] 00:20:40.434 }, 00:20:40.434 { 00:20:40.434 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.434 "subtype": "NVMe", 00:20:40.434 "listen_addresses": [ 00:20:40.434 { 00:20:40.434 "trtype": "TCP", 00:20:40.434 "adrfam": "IPv4", 00:20:40.434 "traddr": "10.0.0.3", 00:20:40.434 "trsvcid": "4420" 00:20:40.434 } 00:20:40.434 ], 00:20:40.434 "allow_any_host": true, 00:20:40.434 "hosts": [], 00:20:40.434 "serial_number": "SPDK00000000000001", 00:20:40.434 "model_number": "SPDK bdev Controller", 00:20:40.434 "max_namespaces": 32, 00:20:40.434 "min_cntlid": 1, 00:20:40.434 "max_cntlid": 65519, 00:20:40.434 "namespaces": [ 00:20:40.434 { 00:20:40.434 "nsid": 1, 00:20:40.434 "bdev_name": "Malloc0", 00:20:40.434 "name": "Malloc0", 00:20:40.434 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:40.434 "eui64": "ABCDEF0123456789", 00:20:40.434 "uuid": "2c091a42-bb38-4920-8c76-776f18e3f9de" 00:20:40.434 } 00:20:40.434 ] 00:20:40.434 } 00:20:40.434 ] 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.434 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:40.434 [2024-09-28 01:33:36.319492] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:20:40.434 [2024-09-28 01:33:36.319846] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79409 ] 00:20:40.697 [2024-09-28 01:33:36.486001] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:40.697 [2024-09-28 01:33:36.486137] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:40.697 [2024-09-28 01:33:36.486150] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:40.697 [2024-09-28 01:33:36.486172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:40.697 [2024-09-28 01:33:36.486186] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:40.697 [2024-09-28 01:33:36.486615] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:40.697 [2024-09-28 01:33:36.486690] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:40.697 [2024-09-28 01:33:36.491549] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:40.697 [2024-09-28 01:33:36.491599] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:40.697 [2024-09-28 01:33:36.491610] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:40.697 [2024-09-28 01:33:36.491618] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:40.697 [2024-09-28 01:33:36.491695] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.491715] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.491724] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.697 [2024-09-28 01:33:36.491750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:40.697 [2024-09-28 01:33:36.491793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.697 [2024-09-28 01:33:36.499533] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.697 [2024-09-28 01:33:36.499579] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.697 [2024-09-28 01:33:36.499587] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.499596] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.697 [2024-09-28 01:33:36.499618] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:40.697 [2024-09-28 01:33:36.499634] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:40.697 [2024-09-28 01:33:36.499645] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:40.697 [2024-09-28 01:33:36.499680] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.499690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:40.697 [2024-09-28 01:33:36.499697] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.697 [2024-09-28 01:33:36.499713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.697 [2024-09-28 01:33:36.499747] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.697 [2024-09-28 01:33:36.499841] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.697 [2024-09-28 01:33:36.499854] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.697 [2024-09-28 01:33:36.499860] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.499868] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.697 [2024-09-28 01:33:36.499900] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:40.697 [2024-09-28 01:33:36.499922] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:40.697 [2024-09-28 01:33:36.499937] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.499945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.499951] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.697 [2024-09-28 01:33:36.499977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.697 [2024-09-28 01:33:36.500010] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.697 [2024-09-28 01:33:36.500090] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.697 [2024-09-28 01:33:36.500102] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.697 [2024-09-28 01:33:36.500108] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.500115] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.697 [2024-09-28 01:33:36.500124] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:40.697 [2024-09-28 01:33:36.500138] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:40.697 [2024-09-28 01:33:36.500154] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.500164] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.500171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.697 [2024-09-28 01:33:36.500184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.697 [2024-09-28 01:33:36.500210] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.697 [2024-09-28 01:33:36.500279] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
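The DEBUG trace here is the admin-queue bring-up against the discovery subsystem: FABRIC CONNECT, property reads of VS and CAP, the CC/CSTS enable handshake, and finally IDENTIFY, whose decoded output appears further below. As a sketch only, the same spdk_nvme_identify tool invoked above could also be pointed at the NVM subsystem created earlier instead of at discovery; the subnqn value is the only change relative to the invocation shown in the log, and running it against cnode1 is not part of this recorded step:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all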
00:20:40.697 [2024-09-28 01:33:36.500294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.697 [2024-09-28 01:33:36.500301] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.500307] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.697 [2024-09-28 01:33:36.500316] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:40.697 [2024-09-28 01:33:36.500333] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.500349] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.500356] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.697 [2024-09-28 01:33:36.500369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.697 [2024-09-28 01:33:36.500401] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.697 [2024-09-28 01:33:36.500486] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.697 [2024-09-28 01:33:36.500499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.697 [2024-09-28 01:33:36.500505] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.697 [2024-09-28 01:33:36.500511] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.697 [2024-09-28 01:33:36.500520] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:40.698 [2024-09-28 01:33:36.500534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:40.698 [2024-09-28 01:33:36.500548] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:40.698 [2024-09-28 01:33:36.500657] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:40.698 [2024-09-28 01:33:36.500670] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:40.698 [2024-09-28 01:33:36.500684] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.500698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.500706] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.500719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.698 [2024-09-28 01:33:36.500750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.698 [2024-09-28 01:33:36.500818] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.698 [2024-09-28 01:33:36.500835] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.698 [2024-09-28 
01:33:36.500842] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.500848] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.698 [2024-09-28 01:33:36.500857] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:40.698 [2024-09-28 01:33:36.500874] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.500882] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.500888] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.500901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.698 [2024-09-28 01:33:36.500926] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.698 [2024-09-28 01:33:36.500985] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.698 [2024-09-28 01:33:36.500996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.698 [2024-09-28 01:33:36.501001] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501007] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.698 [2024-09-28 01:33:36.501016] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:40.698 [2024-09-28 01:33:36.501024] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:40.698 [2024-09-28 01:33:36.501048] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:40.698 [2024-09-28 01:33:36.501068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:40.698 [2024-09-28 01:33:36.501089] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501097] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.501114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.698 [2024-09-28 01:33:36.501143] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.698 [2024-09-28 01:33:36.501258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.698 [2024-09-28 01:33:36.501284] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.698 [2024-09-28 01:33:36.501291] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501299] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:40.698 [2024-09-28 01:33:36.501307] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): 
expected_datao=0, payload_size=4096 00:20:40.698 [2024-09-28 01:33:36.501315] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501328] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501335] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501348] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.698 [2024-09-28 01:33:36.501357] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.698 [2024-09-28 01:33:36.501365] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.698 [2024-09-28 01:33:36.501389] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:40.698 [2024-09-28 01:33:36.501401] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:40.698 [2024-09-28 01:33:36.501409] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:40.698 [2024-09-28 01:33:36.501417] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:40.698 [2024-09-28 01:33:36.501424] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:40.698 [2024-09-28 01:33:36.501433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:40.698 [2024-09-28 01:33:36.501460] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:40.698 [2024-09-28 01:33:36.501490] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501499] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501506] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.501534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.698 [2024-09-28 01:33:36.501564] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.698 [2024-09-28 01:33:36.501642] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.698 [2024-09-28 01:33:36.501656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.698 [2024-09-28 01:33:36.501663] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501669] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.698 [2024-09-28 01:33:36.501685] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501693] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501702] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.501718] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.698 [2024-09-28 01:33:36.501729] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501735] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501741] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.501751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.698 [2024-09-28 01:33:36.501760] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501766] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.501781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.698 [2024-09-28 01:33:36.501789] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501795] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501804] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.501829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.698 [2024-09-28 01:33:36.501837] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:40.698 [2024-09-28 01:33:36.501850] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:40.698 [2024-09-28 01:33:36.501864] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.501871] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.501886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.698 [2024-09-28 01:33:36.501921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.698 [2024-09-28 01:33:36.501932] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:40.698 [2024-09-28 01:33:36.501939] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:40.698 [2024-09-28 01:33:36.501946] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.698 [2024-09-28 01:33:36.501953] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.698 [2024-09-28 01:33:36.502053] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.698 [2024-09-28 01:33:36.502066] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.698 [2024-09-28 01:33:36.502071] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:40.698 [2024-09-28 01:33:36.502078] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.698 [2024-09-28 01:33:36.502090] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:40.698 [2024-09-28 01:33:36.502099] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:40.698 [2024-09-28 01:33:36.502121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.698 [2024-09-28 01:33:36.502130] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.698 [2024-09-28 01:33:36.502143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.698 [2024-09-28 01:33:36.502172] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.698 [2024-09-28 01:33:36.502257] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.699 [2024-09-28 01:33:36.502269] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.699 [2024-09-28 01:33:36.502278] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502285] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:40.699 [2024-09-28 01:33:36.502293] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:40.699 [2024-09-28 01:33:36.502300] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502312] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502319] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502333] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.699 [2024-09-28 01:33:36.502343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.699 [2024-09-28 01:33:36.502348] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502355] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.699 [2024-09-28 01:33:36.502383] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:40.699 [2024-09-28 01:33:36.502437] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502497] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.699 [2024-09-28 01:33:36.502512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.699 [2024-09-28 01:33:36.502524] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502532] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502538] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:40.699 [2024-09-28 
01:33:36.502556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.699 [2024-09-28 01:33:36.502589] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.699 [2024-09-28 01:33:36.502601] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:40.699 [2024-09-28 01:33:36.502914] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.699 [2024-09-28 01:33:36.502938] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.699 [2024-09-28 01:33:36.502946] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502956] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:20:40.699 [2024-09-28 01:33:36.502964] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:20:40.699 [2024-09-28 01:33:36.502972] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502987] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.502994] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503003] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.699 [2024-09-28 01:33:36.503011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.699 [2024-09-28 01:33:36.503017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:40.699 [2024-09-28 01:33:36.503073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.699 [2024-09-28 01:33:36.503086] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.699 [2024-09-28 01:33:36.503091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503103] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.699 [2024-09-28 01:33:36.503136] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503149] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.699 [2024-09-28 01:33:36.503162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.699 [2024-09-28 01:33:36.503205] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.699 [2024-09-28 01:33:36.503314] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.699 [2024-09-28 01:33:36.503332] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.699 [2024-09-28 01:33:36.503338] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503345] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:20:40.699 [2024-09-28 01:33:36.503352] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on 
tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:20:40.699 [2024-09-28 01:33:36.503376] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503387] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503393] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503405] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.699 [2024-09-28 01:33:36.503414] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.699 [2024-09-28 01:33:36.503419] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.503425] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.699 [2024-09-28 01:33:36.503447] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.507499] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.699 [2024-09-28 01:33:36.507552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.699 [2024-09-28 01:33:36.507598] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.699 [2024-09-28 01:33:36.507710] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.699 [2024-09-28 01:33:36.507723] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.699 [2024-09-28 01:33:36.507728] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.507734] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:20:40.699 [2024-09-28 01:33:36.507742] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:20:40.699 [2024-09-28 01:33:36.507748] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.507759] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.507765] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.507792] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.699 [2024-09-28 01:33:36.507803] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.699 [2024-09-28 01:33:36.507809] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.699 [2024-09-28 01:33:36.507815] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.699 ===================================================== 00:20:40.699 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:40.699 ===================================================== 00:20:40.699 Controller Capabilities/Features 00:20:40.699 ================================ 00:20:40.699 Vendor ID: 0000 00:20:40.699 Subsystem Vendor ID: 0000 00:20:40.699 Serial Number: .................... 00:20:40.699 Model Number: ........................................ 
00:20:40.699 Firmware Version: 25.01 00:20:40.699 Recommended Arb Burst: 0 00:20:40.699 IEEE OUI Identifier: 00 00 00 00:20:40.699 Multi-path I/O 00:20:40.699 May have multiple subsystem ports: No 00:20:40.699 May have multiple controllers: No 00:20:40.699 Associated with SR-IOV VF: No 00:20:40.699 Max Data Transfer Size: 131072 00:20:40.699 Max Number of Namespaces: 0 00:20:40.699 Max Number of I/O Queues: 1024 00:20:40.699 NVMe Specification Version (VS): 1.3 00:20:40.699 NVMe Specification Version (Identify): 1.3 00:20:40.699 Maximum Queue Entries: 128 00:20:40.699 Contiguous Queues Required: Yes 00:20:40.699 Arbitration Mechanisms Supported 00:20:40.699 Weighted Round Robin: Not Supported 00:20:40.699 Vendor Specific: Not Supported 00:20:40.699 Reset Timeout: 15000 ms 00:20:40.699 Doorbell Stride: 4 bytes 00:20:40.699 NVM Subsystem Reset: Not Supported 00:20:40.699 Command Sets Supported 00:20:40.699 NVM Command Set: Supported 00:20:40.699 Boot Partition: Not Supported 00:20:40.699 Memory Page Size Minimum: 4096 bytes 00:20:40.699 Memory Page Size Maximum: 4096 bytes 00:20:40.699 Persistent Memory Region: Not Supported 00:20:40.699 Optional Asynchronous Events Supported 00:20:40.699 Namespace Attribute Notices: Not Supported 00:20:40.699 Firmware Activation Notices: Not Supported 00:20:40.699 ANA Change Notices: Not Supported 00:20:40.699 PLE Aggregate Log Change Notices: Not Supported 00:20:40.699 LBA Status Info Alert Notices: Not Supported 00:20:40.699 EGE Aggregate Log Change Notices: Not Supported 00:20:40.699 Normal NVM Subsystem Shutdown event: Not Supported 00:20:40.699 Zone Descriptor Change Notices: Not Supported 00:20:40.699 Discovery Log Change Notices: Supported 00:20:40.699 Controller Attributes 00:20:40.699 128-bit Host Identifier: Not Supported 00:20:40.699 Non-Operational Permissive Mode: Not Supported 00:20:40.699 NVM Sets: Not Supported 00:20:40.699 Read Recovery Levels: Not Supported 00:20:40.699 Endurance Groups: Not Supported 00:20:40.699 Predictable Latency Mode: Not Supported 00:20:40.699 Traffic Based Keep ALive: Not Supported 00:20:40.699 Namespace Granularity: Not Supported 00:20:40.699 SQ Associations: Not Supported 00:20:40.699 UUID List: Not Supported 00:20:40.699 Multi-Domain Subsystem: Not Supported 00:20:40.699 Fixed Capacity Management: Not Supported 00:20:40.699 Variable Capacity Management: Not Supported 00:20:40.699 Delete Endurance Group: Not Supported 00:20:40.699 Delete NVM Set: Not Supported 00:20:40.699 Extended LBA Formats Supported: Not Supported 00:20:40.699 Flexible Data Placement Supported: Not Supported 00:20:40.699 00:20:40.699 Controller Memory Buffer Support 00:20:40.699 ================================ 00:20:40.699 Supported: No 00:20:40.699 00:20:40.700 Persistent Memory Region Support 00:20:40.700 ================================ 00:20:40.700 Supported: No 00:20:40.700 00:20:40.700 Admin Command Set Attributes 00:20:40.700 ============================ 00:20:40.700 Security Send/Receive: Not Supported 00:20:40.700 Format NVM: Not Supported 00:20:40.700 Firmware Activate/Download: Not Supported 00:20:40.700 Namespace Management: Not Supported 00:20:40.700 Device Self-Test: Not Supported 00:20:40.700 Directives: Not Supported 00:20:40.700 NVMe-MI: Not Supported 00:20:40.700 Virtualization Management: Not Supported 00:20:40.700 Doorbell Buffer Config: Not Supported 00:20:40.700 Get LBA Status Capability: Not Supported 00:20:40.700 Command & Feature Lockdown Capability: Not Supported 00:20:40.700 Abort Command Limit: 1 00:20:40.700 Async 
Event Request Limit: 4 00:20:40.700 Number of Firmware Slots: N/A 00:20:40.700 Firmware Slot 1 Read-Only: N/A 00:20:40.700 Firmware Activation Without Reset: N/A 00:20:40.700 Multiple Update Detection Support: N/A 00:20:40.700 Firmware Update Granularity: No Information Provided 00:20:40.700 Per-Namespace SMART Log: No 00:20:40.700 Asymmetric Namespace Access Log Page: Not Supported 00:20:40.700 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:40.700 Command Effects Log Page: Not Supported 00:20:40.700 Get Log Page Extended Data: Supported 00:20:40.700 Telemetry Log Pages: Not Supported 00:20:40.700 Persistent Event Log Pages: Not Supported 00:20:40.700 Supported Log Pages Log Page: May Support 00:20:40.700 Commands Supported & Effects Log Page: Not Supported 00:20:40.700 Feature Identifiers & Effects Log Page:May Support 00:20:40.700 NVMe-MI Commands & Effects Log Page: May Support 00:20:40.700 Data Area 4 for Telemetry Log: Not Supported 00:20:40.700 Error Log Page Entries Supported: 128 00:20:40.700 Keep Alive: Not Supported 00:20:40.700 00:20:40.700 NVM Command Set Attributes 00:20:40.700 ========================== 00:20:40.700 Submission Queue Entry Size 00:20:40.700 Max: 1 00:20:40.700 Min: 1 00:20:40.700 Completion Queue Entry Size 00:20:40.700 Max: 1 00:20:40.700 Min: 1 00:20:40.700 Number of Namespaces: 0 00:20:40.700 Compare Command: Not Supported 00:20:40.700 Write Uncorrectable Command: Not Supported 00:20:40.700 Dataset Management Command: Not Supported 00:20:40.700 Write Zeroes Command: Not Supported 00:20:40.700 Set Features Save Field: Not Supported 00:20:40.700 Reservations: Not Supported 00:20:40.700 Timestamp: Not Supported 00:20:40.700 Copy: Not Supported 00:20:40.700 Volatile Write Cache: Not Present 00:20:40.700 Atomic Write Unit (Normal): 1 00:20:40.700 Atomic Write Unit (PFail): 1 00:20:40.700 Atomic Compare & Write Unit: 1 00:20:40.700 Fused Compare & Write: Supported 00:20:40.700 Scatter-Gather List 00:20:40.700 SGL Command Set: Supported 00:20:40.700 SGL Keyed: Supported 00:20:40.700 SGL Bit Bucket Descriptor: Not Supported 00:20:40.700 SGL Metadata Pointer: Not Supported 00:20:40.700 Oversized SGL: Not Supported 00:20:40.700 SGL Metadata Address: Not Supported 00:20:40.700 SGL Offset: Supported 00:20:40.700 Transport SGL Data Block: Not Supported 00:20:40.700 Replay Protected Memory Block: Not Supported 00:20:40.700 00:20:40.700 Firmware Slot Information 00:20:40.700 ========================= 00:20:40.700 Active slot: 0 00:20:40.700 00:20:40.700 00:20:40.700 Error Log 00:20:40.700 ========= 00:20:40.700 00:20:40.700 Active Namespaces 00:20:40.700 ================= 00:20:40.700 Discovery Log Page 00:20:40.700 ================== 00:20:40.700 Generation Counter: 2 00:20:40.700 Number of Records: 2 00:20:40.700 Record Format: 0 00:20:40.700 00:20:40.700 Discovery Log Entry 0 00:20:40.700 ---------------------- 00:20:40.700 Transport Type: 3 (TCP) 00:20:40.700 Address Family: 1 (IPv4) 00:20:40.700 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:40.700 Entry Flags: 00:20:40.700 Duplicate Returned Information: 1 00:20:40.700 Explicit Persistent Connection Support for Discovery: 1 00:20:40.700 Transport Requirements: 00:20:40.700 Secure Channel: Not Required 00:20:40.700 Port ID: 0 (0x0000) 00:20:40.700 Controller ID: 65535 (0xffff) 00:20:40.700 Admin Max SQ Size: 128 00:20:40.700 Transport Service Identifier: 4420 00:20:40.700 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:40.700 Transport Address: 10.0.0.3 00:20:40.700 
Discovery Log Entry 1 00:20:40.700 ---------------------- 00:20:40.700 Transport Type: 3 (TCP) 00:20:40.700 Address Family: 1 (IPv4) 00:20:40.700 Subsystem Type: 2 (NVM Subsystem) 00:20:40.700 Entry Flags: 00:20:40.700 Duplicate Returned Information: 0 00:20:40.700 Explicit Persistent Connection Support for Discovery: 0 00:20:40.700 Transport Requirements: 00:20:40.700 Secure Channel: Not Required 00:20:40.700 Port ID: 0 (0x0000) 00:20:40.700 Controller ID: 65535 (0xffff) 00:20:40.700 Admin Max SQ Size: 128 00:20:40.700 Transport Service Identifier: 4420 00:20:40.700 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:40.700 Transport Address: 10.0.0.3 [2024-09-28 01:33:36.508016] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:40.700 [2024-09-28 01:33:36.508040] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.700 [2024-09-28 01:33:36.508053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.700 [2024-09-28 01:33:36.508062] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:40.700 [2024-09-28 01:33:36.508070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.700 [2024-09-28 01:33:36.508082] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:40.700 [2024-09-28 01:33:36.508091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.700 [2024-09-28 01:33:36.508098] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.700 [2024-09-28 01:33:36.508106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.700 [2024-09-28 01:33:36.508123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508131] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508138] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.700 [2024-09-28 01:33:36.508155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.700 [2024-09-28 01:33:36.508188] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.700 [2024-09-28 01:33:36.508273] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.700 [2024-09-28 01:33:36.508286] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.700 [2024-09-28 01:33:36.508292] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508299] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.700 [2024-09-28 01:33:36.508313] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508320] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508327] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x61500000f080) 00:20:40.700 [2024-09-28 01:33:36.508343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.700 [2024-09-28 01:33:36.508376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.700 [2024-09-28 01:33:36.508474] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.700 [2024-09-28 01:33:36.508489] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.700 [2024-09-28 01:33:36.508495] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.700 [2024-09-28 01:33:36.508509] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:40.700 [2024-09-28 01:33:36.508518] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:40.700 [2024-09-28 01:33:36.508534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508542] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.700 [2024-09-28 01:33:36.508568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.700 [2024-09-28 01:33:36.508599] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.700 [2024-09-28 01:33:36.508658] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.700 [2024-09-28 01:33:36.508668] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.700 [2024-09-28 01:33:36.508674] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508680] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.700 [2024-09-28 01:33:36.508707] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508716] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.700 [2024-09-28 01:33:36.508722] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.700 [2024-09-28 01:33:36.508734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.508760] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.508819] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.508831] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.508839] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.508845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.508862] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.508869] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.508875] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.508887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.508912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.508980] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.508991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.508997] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.509019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509026] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509032] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.509044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.509068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.509131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.509142] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.509148] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509154] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.509170] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509177] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509183] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.509195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.509224] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.509311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.509321] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.509327] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.509354] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509361] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509367] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.509379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.509402] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.509484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.509502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.509509] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509516] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.509533] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509541] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.509559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.509585] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.509644] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.509655] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.509661] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509667] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.509686] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509695] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509701] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.509713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.509738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.509800] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.509811] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.509816] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509823] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.509839] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509860] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509866] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.509883] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.509908] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.509970] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.509986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.509993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.509999] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.510019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.510027] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.510033] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.510045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.510069] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.510136] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.510152] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.510158] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.510167] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.510184] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.510192] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.510198] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.701 [2024-09-28 01:33:36.510210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.701 [2024-09-28 01:33:36.510239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.701 [2024-09-28 01:33:36.510310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.701 [2024-09-28 01:33:36.510321] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.701 [2024-09-28 01:33:36.510327] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.701 [2024-09-28 01:33:36.510333] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.701 [2024-09-28 01:33:36.510349] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510356] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.702 [2024-09-28 01:33:36.510374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-09-28 
01:33:36.510397] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.702 [2024-09-28 01:33:36.510487] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.702 [2024-09-28 01:33:36.510501] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.702 [2024-09-28 01:33:36.510506] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510513] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.702 [2024-09-28 01:33:36.510530] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510538] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510544] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.702 [2024-09-28 01:33:36.510556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-09-28 01:33:36.510582] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.702 [2024-09-28 01:33:36.510645] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.702 [2024-09-28 01:33:36.510656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.702 [2024-09-28 01:33:36.510662] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510668] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.702 [2024-09-28 01:33:36.510684] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510692] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.702 [2024-09-28 01:33:36.510715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-09-28 01:33:36.510740] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.702 [2024-09-28 01:33:36.510807] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.702 [2024-09-28 01:33:36.510833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.702 [2024-09-28 01:33:36.510839] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.702 [2024-09-28 01:33:36.510863] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510871] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510877] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.702 [2024-09-28 01:33:36.510888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-09-28 01:33:36.510912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.702 [2024-09-28 
01:33:36.510973] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.702 [2024-09-28 01:33:36.510986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.702 [2024-09-28 01:33:36.510992] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.510998] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.702 [2024-09-28 01:33:36.511014] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.511021] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.511027] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.702 [2024-09-28 01:33:36.511063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-09-28 01:33:36.511090] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.702 [2024-09-28 01:33:36.511147] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.702 [2024-09-28 01:33:36.511159] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.702 [2024-09-28 01:33:36.511165] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.511171] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.702 [2024-09-28 01:33:36.511187] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.511195] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.511201] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.702 [2024-09-28 01:33:36.511216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-09-28 01:33:36.511243] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.702 [2024-09-28 01:33:36.511307] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.702 [2024-09-28 01:33:36.511320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.702 [2024-09-28 01:33:36.511326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.511332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.702 [2024-09-28 01:33:36.511349] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.511366] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.511372] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.702 [2024-09-28 01:33:36.511401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-09-28 01:33:36.511426] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.702 [2024-09-28 01:33:36.515515] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.702 [2024-09-28 01:33:36.515542] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.702 [2024-09-28 01:33:36.515550] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.515557] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.702 [2024-09-28 01:33:36.515578] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.515587] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.515593] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.702 [2024-09-28 01:33:36.515606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-09-28 01:33:36.515637] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.702 [2024-09-28 01:33:36.515705] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.702 [2024-09-28 01:33:36.515718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.702 [2024-09-28 01:33:36.515724] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.702 [2024-09-28 01:33:36.515730] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.702 [2024-09-28 01:33:36.515744] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:40.702 00:20:40.702 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:40.702 [2024-09-28 01:33:36.621858] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:20:40.702 [2024-09-28 01:33:36.622181] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79412 ] 00:20:40.964 [2024-09-28 01:33:36.789590] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:40.964 [2024-09-28 01:33:36.789732] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:40.964 [2024-09-28 01:33:36.789745] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:40.964 [2024-09-28 01:33:36.789768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:40.964 [2024-09-28 01:33:36.789783] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:40.964 [2024-09-28 01:33:36.790162] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:40.964 [2024-09-28 01:33:36.790230] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:40.964 [2024-09-28 01:33:36.794587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:40.964 [2024-09-28 01:33:36.794639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:40.964 [2024-09-28 01:33:36.794650] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:40.964 [2024-09-28 01:33:36.794660] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:40.964 [2024-09-28 01:33:36.794745] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.794765] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.794774] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.964 [2024-09-28 01:33:36.794815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:40.964 [2024-09-28 01:33:36.794873] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.964 [2024-09-28 01:33:36.801643] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.964 [2024-09-28 01:33:36.801672] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.964 [2024-09-28 01:33:36.801680] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.801689] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.964 [2024-09-28 01:33:36.801727] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:40.964 [2024-09-28 01:33:36.801745] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:40.964 [2024-09-28 01:33:36.801772] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:40.964 [2024-09-28 01:33:36.801799] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.801809] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.964 [2024-09-28 
01:33:36.801817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.964 [2024-09-28 01:33:36.801848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.964 [2024-09-28 01:33:36.801913] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.964 [2024-09-28 01:33:36.801997] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.964 [2024-09-28 01:33:36.802009] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.964 [2024-09-28 01:33:36.802016] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.964 [2024-09-28 01:33:36.802033] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:40.964 [2024-09-28 01:33:36.802049] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:40.964 [2024-09-28 01:33:36.802072] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802080] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802087] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.964 [2024-09-28 01:33:36.802104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.964 [2024-09-28 01:33:36.802134] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.964 [2024-09-28 01:33:36.802200] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.964 [2024-09-28 01:33:36.802212] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.964 [2024-09-28 01:33:36.802217] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802224] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.964 [2024-09-28 01:33:36.802234] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:40.964 [2024-09-28 01:33:36.802247] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:40.964 [2024-09-28 01:33:36.802262] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802289] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.964 [2024-09-28 01:33:36.802310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.964 [2024-09-28 01:33:36.802337] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.964 [2024-09-28 01:33:36.802415] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.964 [2024-09-28 01:33:36.802426] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.964 [2024-09-28 01:33:36.802434] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.964 [2024-09-28 01:33:36.802466] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:40.964 [2024-09-28 01:33:36.802499] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802508] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.802515] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.964 [2024-09-28 01:33:36.802529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.964 [2024-09-28 01:33:36.802557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.964 [2024-09-28 01:33:36.803004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.964 [2024-09-28 01:33:36.803026] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.964 [2024-09-28 01:33:36.803071] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.803079] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.964 [2024-09-28 01:33:36.803089] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:40.964 [2024-09-28 01:33:36.803104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:40.964 [2024-09-28 01:33:36.803121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:40.964 [2024-09-28 01:33:36.803236] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:40.964 [2024-09-28 01:33:36.803244] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:40.964 [2024-09-28 01:33:36.803260] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.803268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.964 [2024-09-28 01:33:36.803276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.803291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.965 [2024-09-28 01:33:36.803324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.965 [2024-09-28 01:33:36.803708] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.965 [2024-09-28 01:33:36.803733] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.965 [2024-09-28 01:33:36.803741] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 
01:33:36.803748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.965 [2024-09-28 01:33:36.803759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:40.965 [2024-09-28 01:33:36.803782] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.803792] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.803799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.803828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.965 [2024-09-28 01:33:36.803891] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.965 [2024-09-28 01:33:36.804283] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.965 [2024-09-28 01:33:36.804315] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.965 [2024-09-28 01:33:36.804322] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.804341] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.965 [2024-09-28 01:33:36.804366] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:40.965 [2024-09-28 01:33:36.804387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.804412] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:40.965 [2024-09-28 01:33:36.804429] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.804452] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.804480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.804499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.965 [2024-09-28 01:33:36.804554] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.965 [2024-09-28 01:33:36.805128] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.965 [2024-09-28 01:33:36.805154] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.965 [2024-09-28 01:33:36.805161] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.805169] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:40.965 [2024-09-28 01:33:36.805178] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:40.965 [2024-09-28 01:33:36.805186] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.965 [2024-09-28 
01:33:36.805199] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.805207] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.805220] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.965 [2024-09-28 01:33:36.805230] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.965 [2024-09-28 01:33:36.805235] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.805242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.965 [2024-09-28 01:33:36.805283] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:40.965 [2024-09-28 01:33:36.805293] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:40.965 [2024-09-28 01:33:36.805301] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:40.965 [2024-09-28 01:33:36.805312] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:40.965 [2024-09-28 01:33:36.805320] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:40.965 [2024-09-28 01:33:36.805346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.805362] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.805375] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.805384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.805391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.805407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.965 [2024-09-28 01:33:36.805440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.965 [2024-09-28 01:33:36.812562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.965 [2024-09-28 01:33:36.812595] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.965 [2024-09-28 01:33:36.812603] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.965 [2024-09-28 01:33:36.812636] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812648] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812656] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.812673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.965 [2024-09-28 01:33:36.812685] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.965 
[2024-09-28 01:33:36.812692] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.812711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.965 [2024-09-28 01:33:36.812721] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812728] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812734] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.812744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.965 [2024-09-28 01:33:36.812753] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812759] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812766] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.812776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.965 [2024-09-28 01:33:36.812801] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.812852] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.812899] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.812906] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.812918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.965 [2024-09-28 01:33:36.812955] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:40.965 [2024-09-28 01:33:36.812967] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:40.965 [2024-09-28 01:33:36.812974] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:40.965 [2024-09-28 01:33:36.812980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.965 [2024-09-28 01:33:36.812987] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.965 [2024-09-28 01:33:36.813400] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.965 [2024-09-28 01:33:36.813420] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.965 [2024-09-28 01:33:36.813438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.813445] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.965 [2024-09-28 01:33:36.813499] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:40.965 [2024-09-28 01:33:36.813515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.813549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.813561] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.813573] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.813581] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.813588] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.965 [2024-09-28 01:33:36.813605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.965 [2024-09-28 01:33:36.813652] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.965 [2024-09-28 01:33:36.814018] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.965 [2024-09-28 01:33:36.814039] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.965 [2024-09-28 01:33:36.814046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.814053] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.965 [2024-09-28 01:33:36.814135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.814160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:40.965 [2024-09-28 01:33:36.814177] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.965 [2024-09-28 01:33:36.814189] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.966 [2024-09-28 01:33:36.814202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.966 [2024-09-28 01:33:36.814230] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.966 [2024-09-28 01:33:36.814758] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.966 [2024-09-28 01:33:36.814776] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.966 [2024-09-28 01:33:36.814797] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.814805] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:40.966 [2024-09-28 01:33:36.814827] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:40.966 [2024-09-28 01:33:36.814835] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.814864] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.814887] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.814898] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.966 [2024-09-28 01:33:36.814909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.966 [2024-09-28 01:33:36.814915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.814922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.966 [2024-09-28 01:33:36.814954] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:40.966 [2024-09-28 01:33:36.814974] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815007] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815023] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815031] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.966 [2024-09-28 01:33:36.815074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.966 [2024-09-28 01:33:36.815111] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.966 [2024-09-28 01:33:36.815225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.966 [2024-09-28 01:33:36.815238] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.966 [2024-09-28 01:33:36.815244] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815251] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:40.966 [2024-09-28 01:33:36.815259] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:40.966 [2024-09-28 01:33:36.815270] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815282] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815289] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815302] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.966 [2024-09-28 01:33:36.815312] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.966 [2024-09-28 01:33:36.815318] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815326] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.966 [2024-09-28 01:33:36.815410] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815494] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815504] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.966 [2024-09-28 01:33:36.815520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.966 [2024-09-28 01:33:36.815567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.966 [2024-09-28 01:33:36.815657] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.966 [2024-09-28 01:33:36.815674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.966 [2024-09-28 01:33:36.815681] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815688] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:40.966 [2024-09-28 01:33:36.815696] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:40.966 [2024-09-28 01:33:36.815703] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815715] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815722] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.966 [2024-09-28 01:33:36.815759] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.966 [2024-09-28 01:33:36.815765] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815772] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.966 [2024-09-28 01:33:36.815821] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815866] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815893] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815902] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815919] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815926] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:40.966 [2024-09-28 01:33:36.815936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:40.966 [2024-09-28 01:33:36.815944] 
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:40.966 [2024-09-28 01:33:36.815974] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.815983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.966 [2024-09-28 01:33:36.815996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.966 [2024-09-28 01:33:36.816006] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816016] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816022] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:40.966 [2024-09-28 01:33:36.816032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.966 [2024-09-28 01:33:36.816062] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.966 [2024-09-28 01:33:36.816075] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:40.966 [2024-09-28 01:33:36.816151] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.966 [2024-09-28 01:33:36.816164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.966 [2024-09-28 01:33:36.816170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816177] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.966 [2024-09-28 01:33:36.816188] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.966 [2024-09-28 01:33:36.816196] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.966 [2024-09-28 01:33:36.816200] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816206] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:40.966 [2024-09-28 01:33:36.816223] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816230] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:40.966 [2024-09-28 01:33:36.816241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.966 [2024-09-28 01:33:36.816265] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:40.966 [2024-09-28 01:33:36.816326] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.966 [2024-09-28 01:33:36.816336] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.966 [2024-09-28 01:33:36.816341] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816347] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:40.966 [2024-09-28 01:33:36.816362] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:40.966 [2024-09-28 01:33:36.816380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.966 [2024-09-28 01:33:36.816402] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:40.966 [2024-09-28 01:33:36.816544] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.966 [2024-09-28 01:33:36.816561] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.966 [2024-09-28 01:33:36.816567] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816574] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:40.966 [2024-09-28 01:33:36.816592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816603] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:40.966 [2024-09-28 01:33:36.816619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.966 [2024-09-28 01:33:36.816649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:40.966 [2024-09-28 01:33:36.816714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.966 [2024-09-28 01:33:36.816730] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.966 [2024-09-28 01:33:36.816736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.966 [2024-09-28 01:33:36.816743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:40.967 [2024-09-28 01:33:36.816775] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.816786] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:40.967 [2024-09-28 01:33:36.816801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.967 [2024-09-28 01:33:36.816830] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.816852] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:40.967 [2024-09-28 01:33:36.816877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.967 [2024-09-28 01:33:36.816891] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.816899] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:20:40.967 [2024-09-28 01:33:36.816911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.967 [2024-09-28 01:33:36.816925] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.816932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 
00:20:40.967 [2024-09-28 01:33:36.816942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.967 [2024-09-28 01:33:36.816968] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:40.967 [2024-09-28 01:33:36.816979] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:40.967 [2024-09-28 01:33:36.816986] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:20:40.967 [2024-09-28 01:33:36.816992] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:40.967 [2024-09-28 01:33:36.817181] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.967 [2024-09-28 01:33:36.817193] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.967 [2024-09-28 01:33:36.817199] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817205] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:20:40.967 [2024-09-28 01:33:36.817213] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:20:40.967 [2024-09-28 01:33:36.817219] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817249] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817259] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817267] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.967 [2024-09-28 01:33:36.817275] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.967 [2024-09-28 01:33:36.817280] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817286] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:20:40.967 [2024-09-28 01:33:36.817292] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:40.967 [2024-09-28 01:33:36.817298] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817307] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817312] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817322] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.967 [2024-09-28 01:33:36.817332] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.967 [2024-09-28 01:33:36.817338] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817344] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:20:40.967 [2024-09-28 01:33:36.817350] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:40.967 [2024-09-28 01:33:36.817356] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817366] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817372] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817380] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.967 [2024-09-28 01:33:36.817387] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.967 [2024-09-28 01:33:36.817392] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817397] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:20:40.967 [2024-09-28 01:33:36.817406] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:40.967 [2024-09-28 01:33:36.817412] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817421] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817426] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817434] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.967 [2024-09-28 01:33:36.817441] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.967 [2024-09-28 01:33:36.817446] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.967 [2024-09-28 01:33:36.817469] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:40.967 [2024-09-28 01:33:36.817513] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.967 [2024-09-28 01:33:36.817523] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.967 [2024-09-28 01:33:36.817545] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.967 ===================================================== 00:20:40.967 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.967 ===================================================== 00:20:40.967 Controller Capabilities/Features 00:20:40.967 ================================ 00:20:40.967 Vendor ID: 8086 00:20:40.967 Subsystem Vendor ID: 8086 00:20:40.967 Serial Number: SPDK00000000000001 00:20:40.967 Model Number: SPDK bdev Controller 00:20:40.967 Firmware Version: 25.01 00:20:40.967 Recommended Arb Burst: 6 00:20:40.967 IEEE OUI Identifier: e4 d2 5c 00:20:40.967 Multi-path I/O 00:20:40.967 May have multiple subsystem ports: Yes 00:20:40.967 May have multiple controllers: Yes 00:20:40.967 Associated with SR-IOV VF: No 00:20:40.967 Max Data Transfer Size: 131072 00:20:40.967 Max Number of Namespaces: 32 00:20:40.967 Max Number of I/O Queues: 127 00:20:40.967 NVMe Specification Version (VS): 1.3 00:20:40.967 NVMe Specification Version (Identify): 1.3 00:20:40.967 Maximum Queue Entries: 128 00:20:40.967 Contiguous Queues Required: Yes 00:20:40.967 Arbitration Mechanisms Supported 00:20:40.967 Weighted Round Robin: Not Supported 00:20:40.967 Vendor Specific: Not Supported 00:20:40.967 Reset Timeout: 15000 ms 00:20:40.967 Doorbell Stride: 4 bytes 00:20:40.967 NVM Subsystem Reset: Not Supported 00:20:40.967 Command Sets Supported 00:20:40.967 NVM Command Set: Supported 00:20:40.967 Boot Partition: Not Supported 00:20:40.967 Memory Page Size Minimum: 4096 bytes 00:20:40.967 Memory Page Size Maximum: 4096 bytes 00:20:40.967 Persistent Memory Region: Not 
Supported 00:20:40.967 Optional Asynchronous Events Supported 00:20:40.967 Namespace Attribute Notices: Supported 00:20:40.967 Firmware Activation Notices: Not Supported 00:20:40.967 ANA Change Notices: Not Supported 00:20:40.967 PLE Aggregate Log Change Notices: Not Supported 00:20:40.967 LBA Status Info Alert Notices: Not Supported 00:20:40.967 EGE Aggregate Log Change Notices: Not Supported 00:20:40.967 Normal NVM Subsystem Shutdown event: Not Supported 00:20:40.967 Zone Descriptor Change Notices: Not Supported 00:20:40.967 Discovery Log Change Notices: Not Supported 00:20:40.967 Controller Attributes 00:20:40.967 128-bit Host Identifier: Supported 00:20:40.967 Non-Operational Permissive Mode: Not Supported 00:20:40.967 NVM Sets: Not Supported 00:20:40.967 Read Recovery Levels: Not Supported 00:20:40.967 Endurance Groups: Not Supported 00:20:40.967 Predictable Latency Mode: Not Supported 00:20:40.967 Traffic Based Keep ALive: Not Supported 00:20:40.967 Namespace Granularity: Not Supported 00:20:40.967 SQ Associations: Not Supported 00:20:40.967 UUID List: Not Supported 00:20:40.967 Multi-Domain Subsystem: Not Supported 00:20:40.967 Fixed Capacity Management: Not Supported 00:20:40.967 Variable Capacity Management: Not Supported 00:20:40.967 Delete Endurance Group: Not Supported 00:20:40.967 Delete NVM Set: Not Supported 00:20:40.967 Extended LBA Formats Supported: Not Supported 00:20:40.967 Flexible Data Placement Supported: Not Supported 00:20:40.967 00:20:40.967 Controller Memory Buffer Support 00:20:40.967 ================================ 00:20:40.967 Supported: No 00:20:40.967 00:20:40.967 Persistent Memory Region Support 00:20:40.967 ================================ 00:20:40.967 Supported: No 00:20:40.967 00:20:40.967 Admin Command Set Attributes 00:20:40.967 ============================ 00:20:40.967 Security Send/Receive: Not Supported 00:20:40.967 Format NVM: Not Supported 00:20:40.967 Firmware Activate/Download: Not Supported 00:20:40.967 Namespace Management: Not Supported 00:20:40.967 Device Self-Test: Not Supported 00:20:40.967 Directives: Not Supported 00:20:40.967 NVMe-MI: Not Supported 00:20:40.967 Virtualization Management: Not Supported 00:20:40.967 Doorbell Buffer Config: Not Supported 00:20:40.967 Get LBA Status Capability: Not Supported 00:20:40.967 Command & Feature Lockdown Capability: Not Supported 00:20:40.967 Abort Command Limit: 4 00:20:40.967 Async Event Request Limit: 4 00:20:40.967 Number of Firmware Slots: N/A 00:20:40.967 Firmware Slot 1 Read-Only: N/A 00:20:40.967 Firmware Activation Without Reset: N/A 00:20:40.967 Multiple Update Detection Support: N/A 00:20:40.967 Firmware Update Granularity: No Information Provided 00:20:40.967 Per-Namespace SMART Log: No 00:20:40.968 Asymmetric Namespace Access Log Page: Not Supported 00:20:40.968 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:40.968 Command Effects Log Page: Supported 00:20:40.968 Get Log Page Extended Data: Supported 00:20:40.968 Telemetry Log Pages: Not Supported 00:20:40.968 Persistent Event Log Pages: Not Supported 00:20:40.968 Supported Log Pages Log Page: May Support 00:20:40.968 Commands Supported & Effects Log Page: Not Supported 00:20:40.968 Feature Identifiers & Effects Log Page:May Support 00:20:40.968 NVMe-MI Commands & Effects Log Page: May Support 00:20:40.968 Data Area 4 for Telemetry Log: Not Supported 00:20:40.968 Error Log Page Entries Supported: 128 00:20:40.968 Keep Alive: Supported 00:20:40.968 Keep Alive Granularity: 10000 ms 00:20:40.968 00:20:40.968 NVM Command Set Attributes 
00:20:40.968 ========================== 00:20:40.968 Submission Queue Entry Size 00:20:40.968 Max: 64 00:20:40.968 Min: 64 00:20:40.968 Completion Queue Entry Size 00:20:40.968 Max: 16 00:20:40.968 Min: 16 00:20:40.968 Number of Namespaces: 32 00:20:40.968 Compare Command: Supported 00:20:40.968 Write Uncorrectable Command: Not Supported 00:20:40.968 Dataset Management Command: Supported 00:20:40.968 Write Zeroes Command: Supported 00:20:40.968 Set Features Save Field: Not Supported 00:20:40.968 Reservations: Supported 00:20:40.968 Timestamp: Not Supported 00:20:40.968 Copy: Supported 00:20:40.968 Volatile Write Cache: Present 00:20:40.968 Atomic Write Unit (Normal): 1 00:20:40.968 Atomic Write Unit (PFail): 1 00:20:40.968 Atomic Compare & Write Unit: 1 00:20:40.968 Fused Compare & Write: Supported 00:20:40.968 Scatter-Gather List 00:20:40.968 SGL Command Set: Supported 00:20:40.968 SGL Keyed: Supported 00:20:40.968 SGL Bit Bucket Descriptor: Not Supported 00:20:40.968 SGL Metadata Pointer: Not Supported 00:20:40.968 Oversized SGL: Not Supported 00:20:40.968 SGL Metadata Address: Not Supported 00:20:40.968 SGL Offset: Supported 00:20:40.968 Transport SGL Data Block: Not Supported 00:20:40.968 Replay Protected Memory Block: Not Supported 00:20:40.968 00:20:40.968 Firmware Slot Information 00:20:40.968 ========================= 00:20:40.968 Active slot: 1 00:20:40.968 Slot 1 Firmware Revision: 25.01 00:20:40.968 00:20:40.968 00:20:40.968 Commands Supported and Effects 00:20:40.968 ============================== 00:20:40.968 Admin Commands 00:20:40.968 -------------- 00:20:40.968 Get Log Page (02h): Supported 00:20:40.968 Identify (06h): Supported 00:20:40.968 Abort (08h): Supported 00:20:40.968 Set Features (09h): Supported 00:20:40.968 Get Features (0Ah): Supported 00:20:40.968 Asynchronous Event Request (0Ch): Supported 00:20:40.968 Keep Alive (18h): Supported 00:20:40.968 I/O Commands 00:20:40.968 ------------ 00:20:40.968 Flush (00h): Supported LBA-Change 00:20:40.968 Write (01h): Supported LBA-Change 00:20:40.968 Read (02h): Supported 00:20:40.968 Compare (05h): Supported 00:20:40.968 Write Zeroes (08h): Supported LBA-Change 00:20:40.968 Dataset Management (09h): Supported LBA-Change 00:20:40.968 Copy (19h): Supported LBA-Change 00:20:40.968 00:20:40.968 Error Log 00:20:40.968 ========= 00:20:40.968 00:20:40.968 Arbitration 00:20:40.968 =========== 00:20:40.968 Arbitration Burst: 1 00:20:40.968 00:20:40.968 Power Management 00:20:40.968 ================ 00:20:40.968 Number of Power States: 1 00:20:40.968 Current Power State: Power State #0 00:20:40.968 Power State #0: 00:20:40.968 Max Power: 0.00 W 00:20:40.968 Non-Operational State: Operational 00:20:40.968 Entry Latency: Not Reported 00:20:40.968 Exit Latency: Not Reported 00:20:40.968 Relative Read Throughput: 0 00:20:40.968 Relative Read Latency: 0 00:20:40.968 Relative Write Throughput: 0 00:20:40.968 Relative Write Latency: 0 00:20:40.968 Idle Power: Not Reported 00:20:40.968 Active Power: Not Reported 00:20:40.968 Non-Operational Permissive Mode: Not Supported 00:20:40.968 00:20:40.968 Health Information 00:20:40.968 ================== 00:20:40.968 Critical Warnings: 00:20:40.968 Available Spare Space: OK 00:20:40.968 Temperature: OK 00:20:40.968 Device Reliability: OK 00:20:40.968 Read Only: No 00:20:40.968 Volatile Memory Backup: OK 00:20:40.968 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:40.968 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:40.968 Available Spare: 0% 00:20:40.968 Available Spare Threshold: 0% 
00:20:40.968 Life Percentage Used:[2024-09-28 01:33:36.817555] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.817572] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.968 [2024-09-28 01:33:36.817581] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.968 [2024-09-28 01:33:36.817587] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.968 [2024-09-28 01:33:36.817593] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.817605] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.968 [2024-09-28 01:33:36.817614] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.968 [2024-09-28 01:33:36.817619] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.968 [2024-09-28 01:33:36.817642] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.817851] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.968 [2024-09-28 01:33:36.817882] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:40.968 [2024-09-28 01:33:36.817895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.968 [2024-09-28 01:33:36.817926] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:40.968 [2024-09-28 01:33:36.818761] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.968 [2024-09-28 01:33:36.822541] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.968 [2024-09-28 01:33:36.822566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.968 [2024-09-28 01:33:36.822577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.822691] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:40.968 [2024-09-28 01:33:36.822725] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.822739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.968 [2024-09-28 01:33:36.822750] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.822760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.968 [2024-09-28 01:33:36.822768] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.822777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.968 [2024-09-28 01:33:36.822785] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.822809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.968 [2024-09-28 01:33:36.822839] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.968 [2024-09-28 01:33:36.822862] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.968 [2024-09-28 01:33:36.822868] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.968 [2024-09-28 01:33:36.822882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.968 [2024-09-28 01:33:36.822917] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.968 [2024-09-28 01:33:36.822998] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.968 [2024-09-28 01:33:36.823011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.968 [2024-09-28 01:33:36.823020] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.968 [2024-09-28 01:33:36.823027] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.968 [2024-09-28 01:33:36.823066] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.968 [2024-09-28 01:33:36.823080] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.823088] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.969 [2024-09-28 01:33:36.823103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.969 [2024-09-28 01:33:36.823139] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.969 [2024-09-28 01:33:36.823240] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.969 [2024-09-28 01:33:36.823253] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.969 [2024-09-28 01:33:36.823260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.823267] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.969 [2024-09-28 01:33:36.823276] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:40.969 [2024-09-28 01:33:36.823285] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:40.969 [2024-09-28 01:33:36.823307] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.823316] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.823325] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.969 [2024-09-28 01:33:36.823339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.969 [2024-09-28 01:33:36.823393] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.969 [2024-09-28 01:33:36.823485] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.969 [2024-09-28 01:33:36.823498] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.969 [2024-09-28 01:33:36.823504] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.823723] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.969 [2024-09-28 01:33:36.823913] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.824004] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.824052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.969 [2024-09-28 01:33:36.824171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.969 [2024-09-28 01:33:36.824306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.969 [2024-09-28 01:33:36.824700] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.969 [2024-09-28 01:33:36.828554] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.969 [2024-09-28 01:33:36.828576] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.828584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.969 [2024-09-28 01:33:36.828616] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.828626] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.828633] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:40.969 [2024-09-28 01:33:36.828647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.969 [2024-09-28 01:33:36.828682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:40.969 [2024-09-28 01:33:36.828753] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.969 [2024-09-28 01:33:36.828768] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.969 [2024-09-28 01:33:36.828774] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.969 [2024-09-28 01:33:36.828781] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:40.969 [2024-09-28 01:33:36.828795] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:20:40.969 0% 00:20:40.969 Data Units Read: 0 00:20:40.969 Data Units Written: 0 00:20:40.969 Host Read Commands: 0 00:20:40.969 Host Write Commands: 0 00:20:40.969 Controller Busy Time: 0 minutes 00:20:40.969 Power Cycles: 0 00:20:40.969 Power On Hours: 0 hours 00:20:40.969 Unsafe Shutdowns: 0 00:20:40.969 Unrecoverable Media Errors: 0 00:20:40.969 Lifetime Error Log Entries: 0 00:20:40.969 Warning Temperature Time: 0 minutes 00:20:40.969 Critical Temperature Time: 0 minutes 00:20:40.969 00:20:40.969 Number of Queues 00:20:40.969 ================ 00:20:40.969 Number of I/O Submission Queues: 127 00:20:40.969 Number of I/O Completion Queues: 127 00:20:40.969 00:20:40.969 Active Namespaces 00:20:40.969 ================= 00:20:40.969 Namespace ID:1 00:20:40.969 Error Recovery Timeout: Unlimited 00:20:40.969 Command Set Identifier: NVM (00h) 00:20:40.969 Deallocate: Supported 00:20:40.969 
Deallocated/Unwritten Error: Not Supported 00:20:40.969 Deallocated Read Value: Unknown 00:20:40.969 Deallocate in Write Zeroes: Not Supported 00:20:40.969 Deallocated Guard Field: 0xFFFF 00:20:40.969 Flush: Supported 00:20:40.969 Reservation: Supported 00:20:40.969 Namespace Sharing Capabilities: Multiple Controllers 00:20:40.969 Size (in LBAs): 131072 (0GiB) 00:20:40.969 Capacity (in LBAs): 131072 (0GiB) 00:20:40.969 Utilization (in LBAs): 131072 (0GiB) 00:20:40.969 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:40.969 EUI64: ABCDEF0123456789 00:20:40.969 UUID: 2c091a42-bb38-4920-8c76-776f18e3f9de 00:20:40.969 Thin Provisioning: Not Supported 00:20:40.969 Per-NS Atomic Units: Yes 00:20:40.969 Atomic Boundary Size (Normal): 0 00:20:40.969 Atomic Boundary Size (PFail): 0 00:20:40.969 Atomic Boundary Offset: 0 00:20:40.969 Maximum Single Source Range Length: 65535 00:20:40.969 Maximum Copy Length: 65535 00:20:40.969 Maximum Source Range Count: 1 00:20:40.969 NGUID/EUI64 Never Reused: No 00:20:40.969 Namespace Write Protected: No 00:20:40.969 Number of LBA Formats: 1 00:20:40.969 Current LBA Format: LBA Format #00 00:20:40.969 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:40.969 00:20:40.969 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.228 01:33:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.228 rmmod nvme_tcp 00:20:41.228 rmmod nvme_fabrics 00:20:41.228 rmmod nvme_keyring 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 79374 ']' 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 79374 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 79374 ']' 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 79374 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:41.228 
01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79374 00:20:41.228 killing process with pid 79374 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79374' 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 79374 00:20:41.228 01:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 79374 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.655 01:33:38 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:20:42.655 00:20:42.655 real 0m4.011s 00:20:42.655 user 0m9.854s 00:20:42.655 sys 0m0.983s 00:20:42.655 ************************************ 00:20:42.655 END TEST nvmf_identify 00:20:42.655 ************************************ 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.655 ************************************ 00:20:42.655 START TEST nvmf_perf 00:20:42.655 ************************************ 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:42.655 * Looking for test storage... 00:20:42.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:42.655 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.916 --rc genhtml_branch_coverage=1 00:20:42.916 --rc genhtml_function_coverage=1 00:20:42.916 --rc genhtml_legend=1 00:20:42.916 --rc geninfo_all_blocks=1 00:20:42.916 --rc geninfo_unexecuted_blocks=1 00:20:42.916 00:20:42.916 ' 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.916 --rc genhtml_branch_coverage=1 00:20:42.916 --rc genhtml_function_coverage=1 00:20:42.916 --rc genhtml_legend=1 00:20:42.916 --rc geninfo_all_blocks=1 00:20:42.916 --rc geninfo_unexecuted_blocks=1 00:20:42.916 00:20:42.916 ' 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.916 --rc genhtml_branch_coverage=1 00:20:42.916 --rc genhtml_function_coverage=1 00:20:42.916 --rc genhtml_legend=1 00:20:42.916 --rc geninfo_all_blocks=1 00:20:42.916 --rc geninfo_unexecuted_blocks=1 00:20:42.916 00:20:42.916 ' 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.916 --rc genhtml_branch_coverage=1 00:20:42.916 --rc genhtml_function_coverage=1 00:20:42.916 --rc genhtml_legend=1 00:20:42.916 --rc geninfo_all_blocks=1 00:20:42.916 --rc geninfo_unexecuted_blocks=1 00:20:42.916 00:20:42.916 ' 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.916 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.917 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:42.917 Cannot find device "nvmf_init_br" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:42.917 Cannot find device "nvmf_init_br2" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:42.917 Cannot find device "nvmf_tgt_br" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.917 Cannot find device "nvmf_tgt_br2" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:42.917 Cannot find device "nvmf_init_br" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:42.917 Cannot find device "nvmf_init_br2" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:42.917 Cannot find device "nvmf_tgt_br" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:42.917 Cannot find device "nvmf_tgt_br2" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:42.917 Cannot find device "nvmf_br" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:42.917 Cannot find device "nvmf_init_if" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:42.917 Cannot find device "nvmf_init_if2" 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:42.917 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:43.177 01:33:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:43.177 01:33:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:43.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:43.177 00:20:43.177 --- 10.0.0.3 ping statistics --- 00:20:43.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.177 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:43.177 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:43.177 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:20:43.177 00:20:43.177 --- 10.0.0.4 ping statistics --- 00:20:43.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.177 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:43.177 00:20:43.177 --- 10.0.0.1 ping statistics --- 00:20:43.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.177 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:43.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:43.177 00:20:43.177 --- 10.0.0.2 ping statistics --- 00:20:43.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.177 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:43.177 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=79644 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 79644 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 79644 ']' 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.436 01:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.436 [2024-09-28 01:33:39.245885] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:43.436 [2024-09-28 01:33:39.246297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.695 [2024-09-28 01:33:39.419750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.695 [2024-09-28 01:33:39.585579] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.695 [2024-09-28 01:33:39.585632] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.695 [2024-09-28 01:33:39.585666] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.695 [2024-09-28 01:33:39.585677] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.695 [2024-09-28 01:33:39.585688] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.695 [2024-09-28 01:33:39.585852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.695 [2024-09-28 01:33:39.586649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.695 [2024-09-28 01:33:39.586731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.695 [2024-09-28 01:33:39.586707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.953 [2024-09-28 01:33:39.752528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:44.521 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.521 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:44.521 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:44.521 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:44.521 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:44.521 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.521 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:44.521 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:45.088 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:45.088 01:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:45.347 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:45.347 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:45.606 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:45.606 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:20:45.606 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:45.606 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:45.606 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:45.865 [2024-09-28 01:33:41.591325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.865 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:46.124 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:46.124 01:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:46.383 01:33:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:46.383 01:33:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:46.642 01:33:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:46.900 [2024-09-28 01:33:42.591746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:46.900 01:33:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:47.159 01:33:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:47.159 01:33:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:47.159 01:33:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:47.159 01:33:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:48.094 Initializing NVMe Controllers 00:20:48.094 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:48.094 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:48.094 Initialization complete. Launching workers. 00:20:48.094 ======================================================== 00:20:48.094 Latency(us) 00:20:48.094 Device Information : IOPS MiB/s Average min max 00:20:48.094 PCIE (0000:00:10.0) NSID 1 from core 0: 20684.54 80.80 1546.21 427.75 8219.79 00:20:48.094 ======================================================== 00:20:48.094 Total : 20684.54 80.80 1546.21 427.75 8219.79 00:20:48.094 00:20:48.353 01:33:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:49.730 Initializing NVMe Controllers 00:20:49.730 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:49.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:49.730 Initialization complete. Launching workers. 
00:20:49.730 ======================================================== 00:20:49.730 Latency(us) 00:20:49.730 Device Information : IOPS MiB/s Average min max 00:20:49.730 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2996.99 11.71 333.16 128.35 7195.43 00:20:49.730 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8185.38 5401.77 12013.37 00:20:49.730 ======================================================== 00:20:49.730 Total : 3119.99 12.19 642.72 128.35 12013.37 00:20:49.730 00:20:49.730 01:33:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:51.107 Initializing NVMe Controllers 00:20:51.107 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.107 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:51.107 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:51.107 Initialization complete. Launching workers. 00:20:51.107 ======================================================== 00:20:51.107 Latency(us) 00:20:51.107 Device Information : IOPS MiB/s Average min max 00:20:51.107 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7924.37 30.95 4038.24 560.66 10619.67 00:20:51.107 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3682.63 14.39 8717.48 5831.71 15505.79 00:20:51.107 ======================================================== 00:20:51.107 Total : 11607.00 45.34 5522.85 560.66 15505.79 00:20:51.107 00:20:51.107 01:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:51.107 01:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:54.394 Initializing NVMe Controllers 00:20:54.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:54.394 Controller IO queue size 128, less than required. 00:20:54.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:54.394 Controller IO queue size 128, less than required. 00:20:54.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:54.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:54.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:54.394 Initialization complete. Launching workers. 
00:20:54.394 ======================================================== 00:20:54.394 Latency(us) 00:20:54.394 Device Information : IOPS MiB/s Average min max 00:20:54.394 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1484.26 371.07 89307.77 42893.23 223802.39 00:20:54.394 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.53 159.63 222108.05 74895.45 470459.81 00:20:54.394 ======================================================== 00:20:54.394 Total : 2122.79 530.70 129253.55 42893.23 470459.81 00:20:54.394 00:20:54.394 01:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:20:54.394 Initializing NVMe Controllers 00:20:54.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:54.394 Controller IO queue size 128, less than required. 00:20:54.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:54.394 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:54.394 Controller IO queue size 128, less than required. 00:20:54.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:54.394 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:54.394 WARNING: Some requested NVMe devices were skipped 00:20:54.394 No valid NVMe controllers or AIO or URING devices found 00:20:54.394 01:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:20:57.683 Initializing NVMe Controllers 00:20:57.683 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.683 Controller IO queue size 128, less than required. 00:20:57.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.683 Controller IO queue size 128, less than required. 00:20:57.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.683 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:57.683 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:57.683 Initialization complete. Launching workers. 
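Note: the two WARNING lines above are a block-size mismatch, not a target failure. The requested -o 36964 is not a whole number of 512-byte or 4096-byte sectors, so both namespaces are dropped from that pass and perf reports no valid controllers. A quick check of the arithmetic:

  # 36964 bytes is not sector-aligned for either namespace
  echo $(( 36964 % 512 ))    # 100, so the 512-byte namespace is skipped
  echo $(( 36964 % 4096 ))   # 100, so the 4096-byte namespace is skipped as well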
00:20:57.683 00:20:57.683 ==================== 00:20:57.683 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:57.683 TCP transport: 00:20:57.683 polls: 6860 00:20:57.683 idle_polls: 2980 00:20:57.683 sock_completions: 3880 00:20:57.683 nvme_completions: 5587 00:20:57.683 submitted_requests: 8348 00:20:57.683 queued_requests: 1 00:20:57.683 00:20:57.683 ==================== 00:20:57.683 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:57.683 TCP transport: 00:20:57.683 polls: 7262 00:20:57.683 idle_polls: 3615 00:20:57.683 sock_completions: 3647 00:20:57.683 nvme_completions: 6093 00:20:57.683 submitted_requests: 9146 00:20:57.683 queued_requests: 1 00:20:57.683 ======================================================== 00:20:57.683 Latency(us) 00:20:57.683 Device Information : IOPS MiB/s Average min max 00:20:57.683 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1395.02 348.75 94107.86 58818.74 241216.80 00:20:57.683 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1521.38 380.35 87003.81 45536.98 333417.48 00:20:57.683 ======================================================== 00:20:57.683 Total : 2916.40 729.10 90401.93 45536.98 333417.48 00:20:57.683 00:20:57.683 01:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:57.683 01:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=105d9177-ed38-4a7c-8b0c-58c23a0529e3 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 105d9177-ed38-4a7c-8b0c-58c23a0529e3 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=105d9177-ed38-4a7c-8b0c-58c23a0529e3 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:20:57.683 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:57.942 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:57.942 { 00:20:57.942 "uuid": "105d9177-ed38-4a7c-8b0c-58c23a0529e3", 00:20:57.942 "name": "lvs_0", 00:20:57.942 "base_bdev": "Nvme0n1", 00:20:57.942 "total_data_clusters": 1278, 00:20:57.942 "free_clusters": 1278, 00:20:57.942 "block_size": 4096, 00:20:57.942 "cluster_size": 4194304 00:20:57.942 } 00:20:57.942 ]' 00:20:57.942 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="105d9177-ed38-4a7c-8b0c-58c23a0529e3") .free_clusters' 00:20:58.200 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:20:58.200 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="105d9177-ed38-4a7c-8b0c-58c23a0529e3") .cluster_size' 00:20:58.200 5112 00:20:58.200 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:20:58.200 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:20:58.200 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:20:58.200 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:58.200 01:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 105d9177-ed38-4a7c-8b0c-58c23a0529e3 lbd_0 5112 00:20:58.459 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=2f1115fd-5a72-46c7-847b-bba0510e283d 00:20:58.459 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2f1115fd-5a72-46c7-847b-bba0510e283d lvs_n_0 00:20:58.717 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=560534dc-09b9-4e45-8680-40c6a59eb8fd 00:20:58.717 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 560534dc-09b9-4e45-8680-40c6a59eb8fd 00:20:58.717 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=560534dc-09b9-4e45-8680-40c6a59eb8fd 00:20:58.717 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:58.717 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:20:58.717 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:20:58.717 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:58.974 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:58.974 { 00:20:58.974 "uuid": "105d9177-ed38-4a7c-8b0c-58c23a0529e3", 00:20:58.974 "name": "lvs_0", 00:20:58.974 "base_bdev": "Nvme0n1", 00:20:58.974 "total_data_clusters": 1278, 00:20:58.974 "free_clusters": 0, 00:20:58.974 "block_size": 4096, 00:20:58.974 "cluster_size": 4194304 00:20:58.974 }, 00:20:58.974 { 00:20:58.974 "uuid": "560534dc-09b9-4e45-8680-40c6a59eb8fd", 00:20:58.974 "name": "lvs_n_0", 00:20:58.974 "base_bdev": "2f1115fd-5a72-46c7-847b-bba0510e283d", 00:20:58.974 "total_data_clusters": 1276, 00:20:58.974 "free_clusters": 1276, 00:20:58.974 "block_size": 4096, 00:20:58.974 "cluster_size": 4194304 00:20:58.974 } 00:20:58.974 ]' 00:20:58.974 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="560534dc-09b9-4e45-8680-40c6a59eb8fd") .free_clusters' 00:20:59.232 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:20:59.232 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="560534dc-09b9-4e45-8680-40c6a59eb8fd") .cluster_size' 00:20:59.232 5104 00:20:59.232 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:20:59.232 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:20:59.232 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:20:59.232 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:59.232 01:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 560534dc-09b9-4e45-8680-40c6a59eb8fd lbd_nest_0 5104 00:20:59.490 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=3e9ec3b1-fc89-4449-aa84-64fcdd95f738 00:20:59.490 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.747 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:59.747 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 3e9ec3b1-fc89-4449-aa84-64fcdd95f738 00:21:00.006 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:00.264 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:00.264 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:00.264 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:00.264 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:00.264 01:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:00.521 Initializing NVMe Controllers 00:21:00.521 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.521 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:00.521 WARNING: Some requested NVMe devices were skipped 00:21:00.521 No valid NVMe controllers or AIO or URING devices found 00:21:00.521 01:33:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:00.521 01:33:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:12.730 Initializing NVMe Controllers 00:21:12.730 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:12.730 Initialization complete. Launching workers. 
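Note: the 5112 and 5104 figures passed to bdev_lvol_create come from get_lvs_free_mb, which, judging by the values echoed above, multiplies free_clusters by cluster_size and converts to MiB: 1278 clusters of 4 MiB in lvs_0 and 1276 in the nested lvs_n_0. The arithmetic, as a sketch:

  # free MiB = free_clusters * cluster_size / 1048576
  echo $(( 1278 * 4194304 / 1048576 ))   # 5112 for lvs_0
  echo $(( 1276 * 4194304 / 1048576 ))   # 5104 for lvs_n_0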
00:21:12.730 ======================================================== 00:21:12.730 Latency(us) 00:21:12.730 Device Information : IOPS MiB/s Average min max 00:21:12.730 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 844.90 105.61 1182.73 397.19 7696.61 00:21:12.730 ======================================================== 00:21:12.730 Total : 844.90 105.61 1182.73 397.19 7696.61 00:21:12.730 00:21:12.730 01:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:12.730 01:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:12.730 01:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:12.730 Initializing NVMe Controllers 00:21:12.730 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.730 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:12.730 WARNING: Some requested NVMe devices were skipped 00:21:12.730 No valid NVMe controllers or AIO or URING devices found 00:21:12.730 01:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:12.730 01:34:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:22.747 Initializing NVMe Controllers 00:21:22.747 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.747 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:22.747 Initialization complete. Launching workers. 
00:21:22.747 ======================================================== 00:21:22.747 Latency(us) 00:21:22.747 Device Information : IOPS MiB/s Average min max 00:21:22.747 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1368.67 171.08 23418.33 6244.16 55530.19 00:21:22.747 ======================================================== 00:21:22.747 Total : 1368.67 171.08 23418.33 6244.16 55530.19 00:21:22.747 00:21:22.747 01:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:22.747 01:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:22.747 01:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:22.747 Initializing NVMe Controllers 00:21:22.747 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.747 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:22.747 WARNING: Some requested NVMe devices were skipped 00:21:22.747 No valid NVMe controllers or AIO or URING devices found 00:21:22.747 01:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:22.747 01:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:32.732 Initializing NVMe Controllers 00:21:32.732 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:32.732 Controller IO queue size 128, less than required. 00:21:32.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:32.732 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:32.732 Initialization complete. Launching workers. 
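Note: the runs above come from the qd_depth=(1 32 128) and io_size=(512 131072) arrays declared earlier; perf.sh invokes spdk_nvme_perf once per combination, and every 512-byte pass is skipped because the namespace's 4096-byte blocks cannot serve 512-byte I/O. A condensed sketch of that sweep against the same target address (paths abbreviated):

  # queue-depth / IO-size sweep as driven by perf.sh
  for qd in 1 32 128; do
    for o in 512 131072; do
      spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    done
  done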
00:21:32.732 ======================================================== 00:21:32.732 Latency(us) 00:21:32.732 Device Information : IOPS MiB/s Average min max 00:21:32.732 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3708.55 463.57 34532.79 13388.15 82232.02 00:21:32.732 ======================================================== 00:21:32.732 Total : 3708.55 463.57 34532.79 13388.15 82232.02 00:21:32.732 00:21:32.732 01:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:32.992 01:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3e9ec3b1-fc89-4449-aa84-64fcdd95f738 00:21:33.559 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:33.559 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2f1115fd-5a72-46c7-847b-bba0510e283d 00:21:33.820 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.080 rmmod nvme_tcp 00:21:34.080 rmmod nvme_fabrics 00:21:34.080 rmmod nvme_keyring 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 79644 ']' 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 79644 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 79644 ']' 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 79644 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.080 01:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79644 00:21:34.339 killing process with pid 79644 00:21:34.339 01:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:34.339 01:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:34.339 01:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79644' 00:21:34.339 01:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 79644 00:21:34.339 01:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 79644 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:36.245 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:36.504 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:21:36.505 00:21:36.505 real 0m53.954s 00:21:36.505 user 3m22.901s 00:21:36.505 sys 0m12.057s 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:36.505 01:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.505 ************************************ 00:21:36.505 END TEST nvmf_perf 00:21:36.505 ************************************ 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.765 ************************************ 00:21:36.765 START TEST nvmf_fio_host 00:21:36.765 ************************************ 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:36.765 * Looking for test storage... 00:21:36.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:36.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.765 --rc genhtml_branch_coverage=1 00:21:36.765 --rc genhtml_function_coverage=1 00:21:36.765 --rc genhtml_legend=1 00:21:36.765 --rc geninfo_all_blocks=1 00:21:36.765 --rc geninfo_unexecuted_blocks=1 00:21:36.765 00:21:36.765 ' 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:36.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.765 --rc genhtml_branch_coverage=1 00:21:36.765 --rc genhtml_function_coverage=1 00:21:36.765 --rc genhtml_legend=1 00:21:36.765 --rc geninfo_all_blocks=1 00:21:36.765 --rc geninfo_unexecuted_blocks=1 00:21:36.765 00:21:36.765 ' 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:36.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.765 --rc genhtml_branch_coverage=1 00:21:36.765 --rc genhtml_function_coverage=1 00:21:36.765 --rc genhtml_legend=1 00:21:36.765 --rc geninfo_all_blocks=1 00:21:36.765 --rc geninfo_unexecuted_blocks=1 00:21:36.765 00:21:36.765 ' 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:36.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.765 --rc genhtml_branch_coverage=1 00:21:36.765 --rc genhtml_function_coverage=1 00:21:36.765 --rc genhtml_legend=1 00:21:36.765 --rc geninfo_all_blocks=1 00:21:36.765 --rc geninfo_unexecuted_blocks=1 00:21:36.765 00:21:36.765 ' 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.765 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.766 01:34:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 01:34:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.766 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
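Note: the "[: : integer expression expected" message above is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', i.e. a numeric test against an empty string; the test simply returns non-zero and the script carries on. A minimal reproduction and a guard that avoids the noise (flag is a placeholder, not the variable common.sh actually checks):

  flag=""
  [ "$flag" -eq 1 ]                   # prints "integer expression expected", returns 2
  [[ -n "$flag" && "$flag" -eq 1 ]]   # evaluates to false without an error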
00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.766 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:37.027 Cannot find device "nvmf_init_br" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:37.027 Cannot find device "nvmf_init_br2" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:37.027 Cannot find device "nvmf_tgt_br" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:21:37.027 Cannot find device "nvmf_tgt_br2" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:37.027 Cannot find device "nvmf_init_br" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:37.027 Cannot find device "nvmf_init_br2" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:37.027 Cannot find device "nvmf_tgt_br" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:37.027 Cannot find device "nvmf_tgt_br2" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:37.027 Cannot find device "nvmf_br" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:37.027 Cannot find device "nvmf_init_if" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:37.027 Cannot find device "nvmf_init_if2" 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:37.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:37.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:37.027 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:37.028 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:37.288 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:37.288 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:37.288 01:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:37.288 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:37.288 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:21:37.288 00:21:37.288 --- 10.0.0.3 ping statistics --- 00:21:37.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.288 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:37.288 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:37.288 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:21:37.288 00:21:37.288 --- 10.0.0.4 ping statistics --- 00:21:37.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.288 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:37.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:37.288 00:21:37.288 --- 10.0.0.1 ping statistics --- 00:21:37.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.288 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:37.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:21:37.288 00:21:37.288 --- 10.0.0.2 ping statistics --- 00:21:37.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.288 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=80540 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 80540 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 80540 ']' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:37.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:37.288 01:34:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.288 [2024-09-28 01:34:33.198385] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:21:37.288 [2024-09-28 01:34:33.198599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.548 [2024-09-28 01:34:33.372079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.808 [2024-09-28 01:34:33.522757] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.808 [2024-09-28 01:34:33.522812] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.808 [2024-09-28 01:34:33.522829] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.808 [2024-09-28 01:34:33.522839] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.808 [2024-09-28 01:34:33.522849] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
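The entries above come from host/fio.sh steps 23-28: the SPDK target is launched inside the nvmf_tgt_ns_spdk namespace with shm id 0, the 0xFFFF tracepoint mask and a 0xF core mask, and waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of that launch-and-wait pattern is below; the binary and rpc.py paths are the ones the log prints, while wait_for_rpc_sock is a simplified stand-in for the real waitforlisten helper.

    #!/usr/bin/env bash
    # Sketch: start nvmf_tgt in a network namespace and wait for its RPC socket.
    set -euo pipefail

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    wait_for_rpc_sock() {                 # simplified stand-in for waitforlisten
        local retries=100
        while (( retries-- > 0 )); do
            # rpc.py keeps failing until the app listens on /var/tmp/spdk.sock
            "$RPC" rpc_get_methods &>/dev/null && return 0
            kill -0 "$nvmfpid" || return 1    # bail out if the target died
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_sock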
00:21:37.808 [2024-09-28 01:34:33.523090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.808 [2024-09-28 01:34:33.523706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.808 [2024-09-28 01:34:33.523852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.808 [2024-09-28 01:34:33.523920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.808 [2024-09-28 01:34:33.680026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:38.377 01:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.377 01:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:21:38.377 01:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:38.636 [2024-09-28 01:34:34.387570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.636 01:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:38.636 01:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:38.636 01:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.636 01:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:38.895 Malloc1 00:21:38.895 01:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:39.154 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:39.413 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:39.672 [2024-09-28 01:34:35.532616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:39.672 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:39.930 01:34:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:40.188 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:40.188 fio-3.35 00:21:40.188 Starting 1 thread 00:21:42.722 00:21:42.722 test: (groupid=0, jobs=1): err= 0: pid=80610: Sat Sep 28 01:34:38 2024 00:21:42.722 read: IOPS=7661, BW=29.9MiB/s (31.4MB/s)(60.1MiB/2007msec) 00:21:42.722 slat (usec): min=2, max=168, avg= 3.45, stdev= 2.55 00:21:42.722 clat (usec): min=2009, max=14583, avg=8669.17, stdev=650.15 00:21:42.722 lat (usec): min=2051, max=14586, avg=8672.63, stdev=649.99 00:21:42.722 clat percentiles (usec): 00:21:42.722 | 1.00th=[ 7439], 5.00th=[ 7767], 10.00th=[ 7963], 20.00th=[ 8225], 00:21:42.722 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:21:42.722 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:21:42.722 | 99.00th=[10552], 99.50th=[11469], 99.90th=[12911], 99.95th=[14091], 00:21:42.722 | 99.99th=[14615] 00:21:42.722 bw ( KiB/s): min=28968, max=31440, per=99.92%, avg=30620.00, stdev=1119.06, samples=4 00:21:42.722 iops : min= 7242, max= 7860, avg=7655.00, stdev=279.76, samples=4 00:21:42.722 write: IOPS=7644, BW=29.9MiB/s (31.3MB/s)(59.9MiB/2007msec); 0 zone resets 00:21:42.722 slat (usec): min=2, max=122, avg= 3.54, stdev= 2.04 00:21:42.722 clat (usec): min=1463, max=14143, avg=7938.49, stdev=591.09 00:21:42.722 lat (usec): min=1475, max=14146, avg=7942.03, stdev=591.06 00:21:42.722 clat percentiles (usec): 00:21:42.722 | 1.00th=[ 6783], 5.00th=[ 7177], 10.00th=[ 7308], 20.00th=[ 7504], 00:21:42.722 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:21:42.722 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8717], 00:21:42.722 | 99.00th=[ 9503], 99.50th=[10290], 99.90th=[11994], 99.95th=[12911], 00:21:42.722 | 99.99th=[13960] 00:21:42.722 bw ( KiB/s): min=29952, max=31000, per=99.95%, avg=30562.00, stdev=475.78, samples=4 00:21:42.722 iops : min= 7488, max= 7750, avg=7640.50, stdev=118.94, samples=4 
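The fio_plugin wrapper traced above (autotest_common.sh lines 1337-1352) runs ldd against the spdk_nvme ioengine, greps out the libasan entry, and preloads that runtime ahead of the plugin itself, since the sanitizer has to be the first shared object loaded into the fio process. A rough reconstruction of that logic, assuming the plugin and fio live at the paths the log reports:

    # Sketch: preload the ASAN runtime before the SPDK external ioengine.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

    # Which sanitizer runtime was the plugin linked against? (empty if none)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096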
00:21:42.722 lat (msec) : 2=0.01%, 4=0.09%, 10=98.57%, 20=1.33% 00:21:42.722 cpu : usr=71.64%, sys=20.79%, ctx=10, majf=0, minf=1554 00:21:42.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:42.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:42.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:42.722 issued rwts: total=15376,15342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:42.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:42.722 00:21:42.722 Run status group 0 (all jobs): 00:21:42.722 READ: bw=29.9MiB/s (31.4MB/s), 29.9MiB/s-29.9MiB/s (31.4MB/s-31.4MB/s), io=60.1MiB (63.0MB), run=2007-2007msec 00:21:42.722 WRITE: bw=29.9MiB/s (31.3MB/s), 29.9MiB/s-29.9MiB/s (31.3MB/s-31.3MB/s), io=59.9MiB (62.8MB), run=2007-2007msec 00:21:42.722 ----------------------------------------------------- 00:21:42.722 Suppressions used: 00:21:42.722 count bytes template 00:21:42.722 1 57 /usr/src/fio/parse.c 00:21:42.722 1 8 libtcmalloc_minimal.so 00:21:42.722 ----------------------------------------------------- 00:21:42.722 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:42.722 01:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:42.981 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:42.981 fio-3.35 00:21:42.981 Starting 1 thread 00:21:45.513 00:21:45.513 test: (groupid=0, jobs=1): err= 0: pid=80656: Sat Sep 28 01:34:41 2024 00:21:45.513 read: IOPS=7093, BW=111MiB/s (116MB/s)(223MiB/2008msec) 00:21:45.513 slat (usec): min=3, max=131, avg= 4.55, stdev= 2.63 00:21:45.513 clat (usec): min=2932, max=25875, avg=10196.35, stdev=3026.82 00:21:45.513 lat (usec): min=2937, max=25879, avg=10200.90, stdev=3026.92 00:21:45.513 clat percentiles (usec): 00:21:45.513 | 1.00th=[ 4752], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7570], 00:21:45.513 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552], 00:21:45.513 | 70.00th=[11600], 80.00th=[12780], 90.00th=[14615], 95.00th=[15795], 00:21:45.513 | 99.00th=[18220], 99.50th=[19268], 99.90th=[19792], 99.95th=[20055], 00:21:45.513 | 99.99th=[20579] 00:21:45.513 bw ( KiB/s): min=54112, max=59872, per=50.27%, avg=57048.00, stdev=2396.42, samples=4 00:21:45.513 iops : min= 3382, max= 3742, avg=3565.50, stdev=149.78, samples=4 00:21:45.513 write: IOPS=4031, BW=63.0MiB/s (66.1MB/s)(117MiB/1853msec); 0 zone resets 00:21:45.513 slat (usec): min=32, max=235, avg=38.99, stdev= 9.27 00:21:45.513 clat (usec): min=4218, max=25578, avg=13942.80, stdev=2637.76 00:21:45.513 lat (usec): min=4267, max=25614, avg=13981.78, stdev=2639.28 00:21:45.513 clat percentiles (usec): 00:21:45.513 | 1.00th=[ 8979], 5.00th=[10290], 10.00th=[10814], 20.00th=[11600], 00:21:45.513 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13566], 60.00th=[14484], 00:21:45.513 | 70.00th=[15139], 80.00th=[16188], 90.00th=[17171], 95.00th=[18482], 00:21:45.513 | 99.00th=[21103], 99.50th=[21627], 99.90th=[24511], 99.95th=[25035], 00:21:45.513 | 99.99th=[25560] 00:21:45.513 bw ( KiB/s): min=55712, max=62464, per=91.78%, avg=59208.00, stdev=2809.01, samples=4 00:21:45.513 iops : min= 3482, max= 3904, avg=3700.50, stdev=175.56, samples=4 00:21:45.513 lat (msec) : 4=0.09%, 10=36.37%, 20=62.71%, 50=0.83% 00:21:45.513 cpu : usr=82.96%, sys=12.90%, ctx=5, majf=0, minf=2194 00:21:45.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:45.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:45.513 issued rwts: total=14243,7471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:45.513 00:21:45.514 Run status group 0 (all jobs): 00:21:45.514 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=223MiB (233MB), run=2008-2008msec 00:21:45.514 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=117MiB (122MB), run=1853-1853msec 00:21:45.514 ----------------------------------------------------- 00:21:45.514 Suppressions used: 00:21:45.514 count bytes template 00:21:45.514 1 57 /usr/src/fio/parse.c 00:21:45.514 406 38976 /usr/src/fio/iolog.c 00:21:45.514 1 8 libtcmalloc_minimal.so 00:21:45.514 ----------------------------------------------------- 00:21:45.514 00:21:45.514 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.772 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:45.772 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:45.772 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:45.772 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:21:45.773 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:21:45.773 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:45.773 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:45.773 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:21:45.773 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:21:45.773 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:45.773 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:21:46.031 Nvme0n1 00:21:46.031 01:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:46.290 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a9215552-01e5-455d-beca-911e590696d4 00:21:46.290 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a9215552-01e5-455d-beca-911e590696d4 00:21:46.290 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a9215552-01e5-455d-beca-911e590696d4 00:21:46.290 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:46.290 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:21:46.290 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:21:46.290 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:46.549 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:46.549 { 00:21:46.549 "uuid": "a9215552-01e5-455d-beca-911e590696d4", 00:21:46.549 "name": "lvs_0", 00:21:46.549 "base_bdev": "Nvme0n1", 00:21:46.549 "total_data_clusters": 4, 00:21:46.549 "free_clusters": 4, 00:21:46.549 "block_size": 4096, 00:21:46.549 "cluster_size": 1073741824 00:21:46.549 } 00:21:46.549 ]' 00:21:46.549 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a9215552-01e5-455d-beca-911e590696d4") .free_clusters' 00:21:46.549 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:21:46.549 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a9215552-01e5-455d-beca-911e590696d4") .cluster_size' 00:21:46.807 4096 00:21:46.807 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:21:46.807 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:21:46.807 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1374 -- # echo 4096 00:21:46.807 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:47.067 782f523b-fcea-42c0-9b18-b2c1d0268bb6 00:21:47.067 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:47.067 01:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:47.327 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:47.587 01:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:47.847 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:47.847 fio-3.35 00:21:47.847 Starting 1 thread 
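The fio job starting here targets the lvol stack that host/fio.sh steps 52-58 just assembled: attach the local NVMe drive as a bdev, carve a logical volume store and a volume out of it, then publish the volume through a fresh subsystem on the same 10.0.0.3:4420 listener. Condensed into the bare rpc.py sequence (flags and sizes are the ones reported above; a few incidental options are trimmed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes Nvme0n1
    $rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0           # 1 GiB clusters
    $rpc bdev_lvol_create -l lvs_0 lbd_0 4096                           # 4096 MiB volume

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420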
00:21:50.407 00:21:50.407 test: (groupid=0, jobs=1): err= 0: pid=80759: Sat Sep 28 01:34:46 2024 00:21:50.407 read: IOPS=5121, BW=20.0MiB/s (21.0MB/s)(40.2MiB/2011msec) 00:21:50.407 slat (usec): min=2, max=198, avg= 3.42, stdev= 3.56 00:21:50.407 clat (usec): min=3550, max=22039, avg=13028.81, stdev=1074.58 00:21:50.407 lat (usec): min=3556, max=22042, avg=13032.23, stdev=1074.35 00:21:50.407 clat percentiles (usec): 00:21:50.408 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:21:50.408 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:21:50.408 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:21:50.408 | 99.00th=[15401], 99.50th=[15926], 99.90th=[18220], 99.95th=[20055], 00:21:50.408 | 99.99th=[20317] 00:21:50.408 bw ( KiB/s): min=19736, max=20872, per=99.97%, avg=20482.00, stdev=517.28, samples=4 00:21:50.408 iops : min= 4934, max= 5218, avg=5120.50, stdev=129.32, samples=4 00:21:50.408 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(40.2MiB/2011msec); 0 zone resets 00:21:50.408 slat (usec): min=2, max=176, avg= 3.52, stdev= 3.14 00:21:50.408 clat (usec): min=2142, max=22084, avg=11821.35, stdev=1035.50 00:21:50.408 lat (usec): min=2151, max=22087, avg=11824.87, stdev=1035.31 00:21:50.408 clat percentiles (usec): 00:21:50.408 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:21:50.408 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:21:50.408 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13304], 00:21:50.408 | 99.00th=[14091], 99.50th=[14615], 99.90th=[17957], 99.95th=[19792], 00:21:50.408 | 99.99th=[20317] 00:21:50.408 bw ( KiB/s): min=20112, max=20536, per=99.78%, avg=20415.50, stdev=203.39, samples=4 00:21:50.408 iops : min= 5028, max= 5134, avg=5103.75, stdev=50.76, samples=4 00:21:50.408 lat (msec) : 4=0.06%, 10=1.13%, 20=98.74%, 50=0.06% 00:21:50.408 cpu : usr=77.16%, sys=17.61%, ctx=156, majf=0, minf=1553 00:21:50.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:50.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:50.408 issued rwts: total=10300,10286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:50.408 00:21:50.408 Run status group 0 (all jobs): 00:21:50.408 READ: bw=20.0MiB/s (21.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=40.2MiB (42.2MB), run=2011-2011msec 00:21:50.408 WRITE: bw=20.0MiB/s (20.9MB/s), 20.0MiB/s-20.0MiB/s (20.9MB/s-20.9MB/s), io=40.2MiB (42.1MB), run=2011-2011msec 00:21:50.408 ----------------------------------------------------- 00:21:50.408 Suppressions used: 00:21:50.408 count bytes template 00:21:50.408 1 58 /usr/src/fio/parse.c 00:21:50.408 1 8 libtcmalloc_minimal.so 00:21:50.408 ----------------------------------------------------- 00:21:50.408 00:21:50.408 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:50.679 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:50.936 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=3d68f1ca-6f66-4eb1-a307-8c792ca1a927 00:21:50.936 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # 
get_lvs_free_mb 3d68f1ca-6f66-4eb1-a307-8c792ca1a927 00:21:50.936 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3d68f1ca-6f66-4eb1-a307-8c792ca1a927 00:21:50.936 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:50.936 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:21:50.936 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:21:50.936 01:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:51.502 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:51.502 { 00:21:51.502 "uuid": "a9215552-01e5-455d-beca-911e590696d4", 00:21:51.502 "name": "lvs_0", 00:21:51.502 "base_bdev": "Nvme0n1", 00:21:51.502 "total_data_clusters": 4, 00:21:51.502 "free_clusters": 0, 00:21:51.502 "block_size": 4096, 00:21:51.502 "cluster_size": 1073741824 00:21:51.502 }, 00:21:51.502 { 00:21:51.502 "uuid": "3d68f1ca-6f66-4eb1-a307-8c792ca1a927", 00:21:51.502 "name": "lvs_n_0", 00:21:51.502 "base_bdev": "782f523b-fcea-42c0-9b18-b2c1d0268bb6", 00:21:51.502 "total_data_clusters": 1022, 00:21:51.502 "free_clusters": 1022, 00:21:51.502 "block_size": 4096, 00:21:51.502 "cluster_size": 4194304 00:21:51.502 } 00:21:51.502 ]' 00:21:51.502 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3d68f1ca-6f66-4eb1-a307-8c792ca1a927") .free_clusters' 00:21:51.502 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:21:51.502 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3d68f1ca-6f66-4eb1-a307-8c792ca1a927") .cluster_size' 00:21:51.502 4088 00:21:51.502 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:21:51.502 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:21:51.502 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:21:51.502 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:51.502 102642b1-f16f-42ea-9a54-2726092e4fdc 00:21:51.761 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:52.019 01:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:52.278 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
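get_lvs_free_mb, whose xtrace appears above for both lvs_0 and lvs_n_0, just multiplies a store's free_clusters by its cluster_size and converts to MiB: 4 clusters of 1 GiB gave the 4096 used for lbd_0, and the nested store's 1022 clusters of 4 MiB give the 4088 passed to lbd_nest_0 here. A small sketch of the same computation (the jq filter matches the log; the wrapper function is a simplification, not the original helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_lvs_free_mb() {                         # simplified helper, not the original
        local uuid=$1 lvs fc cs
        lvs=$($rpc bdev_lvol_get_lvstores)
        fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<<"$lvs")
        cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<<"$lvs")
        echo $(( fc * cs / 1024 / 1024 ))       # e.g. 1022 * 4194304 B = 4088 MiB
    }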
00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:52.538 01:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:52.538 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:52.538 fio-3.35 00:21:52.538 Starting 1 thread 00:21:55.070 00:21:55.070 test: (groupid=0, jobs=1): err= 0: pid=80826: Sat Sep 28 01:34:50 2024 00:21:55.070 read: IOPS=4573, BW=17.9MiB/s (18.7MB/s)(35.9MiB/2011msec) 00:21:55.070 slat (usec): min=2, max=266, avg= 3.42, stdev= 4.11 00:21:55.070 clat (usec): min=3935, max=25524, avg=14582.10, stdev=1255.00 00:21:55.070 lat (usec): min=3943, max=25527, avg=14585.51, stdev=1254.59 00:21:55.070 clat percentiles (usec): 00:21:55.070 | 1.00th=[11863], 5.00th=[12780], 10.00th=[13173], 20.00th=[13698], 00:21:55.070 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877], 00:21:55.070 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:21:55.070 | 99.00th=[17171], 99.50th=[17957], 99.90th=[24249], 99.95th=[25297], 00:21:55.070 | 99.99th=[25560] 00:21:55.070 bw ( KiB/s): min=17136, max=18776, per=99.77%, avg=18254.00, stdev=754.59, samples=4 00:21:55.070 iops : min= 4284, max= 4694, avg=4563.50, stdev=188.65, samples=4 00:21:55.070 write: IOPS=4572, BW=17.9MiB/s (18.7MB/s)(35.9MiB/2011msec); 0 zone resets 00:21:55.070 slat (usec): min=2, max=145, avg= 3.60, stdev= 3.08 00:21:55.070 clat (usec): min=2568, max=25683, avg=13214.64, stdev=1197.87 00:21:55.070 lat (usec): min=2599, max=25686, avg=13218.24, stdev=1197.70 00:21:55.070 clat percentiles (usec): 00:21:55.070 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11863], 
20.00th=[12256], 00:21:55.070 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:21:55.070 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:21:55.070 | 99.00th=[15795], 99.50th=[16188], 99.90th=[22414], 99.95th=[25297], 00:21:55.070 | 99.99th=[25560] 00:21:55.070 bw ( KiB/s): min=18048, max=18512, per=99.94%, avg=18278.00, stdev=228.28, samples=4 00:21:55.070 iops : min= 4512, max= 4628, avg=4569.50, stdev=57.07, samples=4 00:21:55.070 lat (msec) : 4=0.02%, 10=0.37%, 20=99.36%, 50=0.26% 00:21:55.070 cpu : usr=76.37%, sys=18.56%, ctx=5, majf=0, minf=1553 00:21:55.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:55.070 issued rwts: total=9198,9195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:55.070 00:21:55.070 Run status group 0 (all jobs): 00:21:55.070 READ: bw=17.9MiB/s (18.7MB/s), 17.9MiB/s-17.9MiB/s (18.7MB/s-18.7MB/s), io=35.9MiB (37.7MB), run=2011-2011msec 00:21:55.070 WRITE: bw=17.9MiB/s (18.7MB/s), 17.9MiB/s-17.9MiB/s (18.7MB/s-18.7MB/s), io=35.9MiB (37.7MB), run=2011-2011msec 00:21:55.070 ----------------------------------------------------- 00:21:55.070 Suppressions used: 00:21:55.070 count bytes template 00:21:55.070 1 58 /usr/src/fio/parse.c 00:21:55.070 1 8 libtcmalloc_minimal.so 00:21:55.070 ----------------------------------------------------- 00:21:55.070 00:21:55.329 01:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:55.588 01:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:21:55.588 01:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:55.846 01:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:56.105 01:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:56.363 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:56.622 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.189 rmmod 
nvme_tcp 00:21:57.189 rmmod nvme_fabrics 00:21:57.189 rmmod nvme_keyring 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 80540 ']' 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 80540 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 80540 ']' 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 80540 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:21:57.189 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.190 01:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80540 00:21:57.190 killing process with pid 80540 00:21:57.190 01:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.190 01:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.190 01:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80540' 00:21:57.190 01:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 80540 00:21:57.190 01:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 80540 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:58.126 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:58.385 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:58.386 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:58.386 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:58.386 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.386 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.386 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.386 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:21:58.386 ************************************ 00:21:58.386 END TEST nvmf_fio_host 00:21:58.386 ************************************ 00:21:58.386 00:21:58.386 real 0m21.829s 00:21:58.386 user 1m34.092s 00:21:58.386 sys 0m4.501s 00:21:58.386 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.386 01:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.645 ************************************ 00:21:58.645 START TEST nvmf_failover 00:21:58.645 ************************************ 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:58.645 * Looking for test storage... 
00:21:58.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:58.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.645 --rc genhtml_branch_coverage=1 00:21:58.645 --rc genhtml_function_coverage=1 00:21:58.645 --rc genhtml_legend=1 00:21:58.645 --rc geninfo_all_blocks=1 00:21:58.645 --rc geninfo_unexecuted_blocks=1 00:21:58.645 00:21:58.645 ' 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:58.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.645 --rc genhtml_branch_coverage=1 00:21:58.645 --rc genhtml_function_coverage=1 00:21:58.645 --rc genhtml_legend=1 00:21:58.645 --rc geninfo_all_blocks=1 00:21:58.645 --rc geninfo_unexecuted_blocks=1 00:21:58.645 00:21:58.645 ' 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:58.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.645 --rc genhtml_branch_coverage=1 00:21:58.645 --rc genhtml_function_coverage=1 00:21:58.645 --rc genhtml_legend=1 00:21:58.645 --rc geninfo_all_blocks=1 00:21:58.645 --rc geninfo_unexecuted_blocks=1 00:21:58.645 00:21:58.645 ' 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:58.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.645 --rc genhtml_branch_coverage=1 00:21:58.645 --rc genhtml_function_coverage=1 00:21:58.645 --rc genhtml_legend=1 00:21:58.645 --rc geninfo_all_blocks=1 00:21:58.645 --rc geninfo_unexecuted_blocks=1 00:21:58.645 00:21:58.645 ' 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.645 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.646 
01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:58.646 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
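nvmftestinit is about to rebuild the virtual test network for the failover run; the "Cannot find device" messages that follow are the teardown pass failing harmlessly, because the previous test already removed everything. The topology it then recreates is a set of veth pairs whose bridge-side ends are enslaved to nvmf_br, with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace and NVMe/TCP traffic explicitly allowed through iptables. A trimmed sketch covering one initiator/target leg (the second pair, the extra addresses and the remaining ACCEPT rules follow the same pattern):

    # Sketch: one initiator/target leg of the nvmf_veth_init topology.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Tag the ACCEPT rule so the iptr cleanup can strip it by comment later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'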
00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:58.646 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:58.905 Cannot find device "nvmf_init_br" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:58.905 Cannot find device "nvmf_init_br2" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:21:58.905 Cannot find device "nvmf_tgt_br" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:58.905 Cannot find device "nvmf_tgt_br2" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:58.905 Cannot find device "nvmf_init_br" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:58.905 Cannot find device "nvmf_init_br2" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:58.905 Cannot find device "nvmf_tgt_br" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:58.905 Cannot find device "nvmf_tgt_br2" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:58.905 Cannot find device "nvmf_br" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:58.905 Cannot find device "nvmf_init_if" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:58.905 Cannot find device "nvmf_init_if2" 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:58.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:58.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:58.905 
01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:58.905 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:21:59.164 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:59.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:21:59.165 00:21:59.165 --- 10.0.0.3 ping statistics --- 00:21:59.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.165 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:59.165 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:59.165 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:21:59.165 00:21:59.165 --- 10.0.0.4 ping statistics --- 00:21:59.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.165 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:59.165 00:21:59.165 --- 10.0.0.1 ping statistics --- 00:21:59.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.165 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:59.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:21:59.165 00:21:59.165 --- 10.0.0.2 ping statistics --- 00:21:59.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.165 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:59.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
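Editor's note: for readability, the veth/namespace topology that nvmf_veth_init builds in the trace above can be condensed into the following standalone sketch. Interface names, addresses, bridge name and the port-4420 iptables rule are taken verbatim from the trace; the ordering is simplified and the cleanup of stale devices at the start of the trace is omitted, so this is a reconstruction, not the test's own helper.

  # Reconstructed sketch of the test network (run as root); assumptions noted above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # first initiator IP
  ip addr add 10.0.0.2/24 dev nvmf_init_if2                                 # second initiator IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target IP
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br                                     # bridge ties host and namespace sides together
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                                  # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target namespace -> host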
00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=81141 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 81141 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81141 ']' 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.165 01:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:59.424 [2024-09-28 01:34:55.101869] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:21:59.424 [2024-09-28 01:34:55.102039] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.424 [2024-09-28 01:34:55.279411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:59.683 [2024-09-28 01:34:55.510126] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.683 [2024-09-28 01:34:55.510212] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.683 [2024-09-28 01:34:55.510247] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.683 [2024-09-28 01:34:55.510263] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.683 [2024-09-28 01:34:55.510278] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
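Editor's note: the target is launched inside that namespace with the nvmf_tgt command visible in the trace. The harness's own waitforlisten helper is not reproduced here; a minimal, hedged way to approximate it is to poll the default RPC socket with the generic rpc_get_methods call (the 30-second timeout and the polling approach are this sketch's assumptions, not the harness's behaviour).

  # Launch the NVMe-oF target inside the test namespace (command taken from the trace).
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Hedged sketch: wait until the app answers on its default RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 30); do
      $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 1
  done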
00:21:59.683 [2024-09-28 01:34:55.510532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.683 [2024-09-28 01:34:55.510782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.683 [2024-09-28 01:34:55.510795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.943 [2024-09-28 01:34:55.694120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:00.201 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.201 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:00.201 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:00.201 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:00.201 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:00.201 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.201 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:00.459 [2024-09-28 01:34:56.325385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.459 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:01.025 Malloc0 00:22:01.025 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.284 01:34:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.284 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:01.542 [2024-09-28 01:34:57.419571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:01.542 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:01.801 [2024-09-28 01:34:57.639716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:01.801 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:02.059 [2024-09-28 01:34:57.871865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81198 00:22:02.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
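Editor's note: collecting the rpc.py calls from the trace above, the target-side configuration performed by failover.sh amounts to the sequence below, with every parameter exactly as logged; only the loop over the three ports is an editorial condensation.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192            # transport options exactly as logged
  $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                          # three listeners = three candidate paths
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
  done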
00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81198 /var/tmp/bdevperf.sock 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81198 ']' 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.059 01:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:02.993 01:34:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.993 01:34:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:02.993 01:34:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.252 NVMe0n1 00:22:03.252 01:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.819 00:22:03.819 01:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81222 00:22:03.819 01:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:03.819 01:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:04.753 01:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:05.011 01:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:08.295 01:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.295 00:22:08.295 01:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:08.553 01:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:11.839 01:35:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:11.839 [2024-09-28 01:35:07.684351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:11.839 01:35:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # 
sleep 1 00:22:13.229 01:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:13.229 01:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 81222 00:22:19.822 { 00:22:19.822 "results": [ 00:22:19.822 { 00:22:19.822 "job": "NVMe0n1", 00:22:19.822 "core_mask": "0x1", 00:22:19.822 "workload": "verify", 00:22:19.822 "status": "finished", 00:22:19.822 "verify_range": { 00:22:19.822 "start": 0, 00:22:19.822 "length": 16384 00:22:19.822 }, 00:22:19.822 "queue_depth": 128, 00:22:19.822 "io_size": 4096, 00:22:19.822 "runtime": 15.010411, 00:22:19.822 "iops": 8079.858706067409, 00:22:19.822 "mibps": 31.561948070575816, 00:22:19.822 "io_failed": 3525, 00:22:19.822 "io_timeout": 0, 00:22:19.822 "avg_latency_us": 15364.27807799242, 00:22:19.822 "min_latency_us": 595.7818181818182, 00:22:19.822 "max_latency_us": 17754.298181818183 00:22:19.822 } 00:22:19.822 ], 00:22:19.822 "core_count": 1 00:22:19.822 } 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 81198 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81198 ']' 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81198 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81198 00:22:19.822 killing process with pid 81198 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81198' 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81198 00:22:19.822 01:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81198 00:22:19.822 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:19.822 [2024-09-28 01:34:58.001366] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:19.822 [2024-09-28 01:34:58.001593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81198 ] 00:22:19.822 [2024-09-28 01:34:58.177985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.822 [2024-09-28 01:34:58.388988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.822 [2024-09-28 01:34:58.545249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:19.822 Running I/O for 15 seconds... 
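Editor's note: the initiator side of the run can likewise be read back out of the trace: bdevperf is started in passive mode on its own RPC socket, the first two logged paths are attached to a single controller, I/O is started, and listeners are then removed and re-added on the target to force path failover while the workload runs. A condensed sketch using only commands that appear above; the comments are editorial.

  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py"
  brpc="$rpc -s /var/tmp/bdevperf.sock"
  nqn=nqn.2016-06.io.spdk:cnode1

  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

  # Attach two paths to the same controller name, then start the 15 s verify workload.
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $nqn
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n $nqn
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  # Fail paths over underneath the running I/O (order of listener changes as logged).
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4420
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $nqn
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4422
  wait   # perform_tests prints the JSON summary shown above when the run finishes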
00:22:19.822 6293.00 IOPS, 24.58 MiB/s [2024-09-28 01:35:00.737109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 
01:35:00.737673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.737971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.737990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.738013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.738032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.738055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.738074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.738099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.738119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.738142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.822 [2024-09-28 01:35:00.738161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.822 [2024-09-28 01:35:00.738186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:68 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.738973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.738996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 
01:35:00.739742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.739965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.739985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.740009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.740029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.740052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.740072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.740096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.740116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.740140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.740160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.740184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.823 [2024-09-28 01:35:00.740204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-09-28 01:35:00.740236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.740959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.740985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.824 [2024-09-28 01:35:00.741429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 
01:35:00.741659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.741971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.741991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.742021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.742042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.742065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.824 [2024-09-28 01:35:00.742085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.824 [2024-09-28 01:35:00.742110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.825 [2024-09-28 01:35:00.742183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.825 [2024-09-28 01:35:00.742226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742605] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.825 [2024-09-28 01:35:00.742633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.742964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.742987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.743006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.743082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.743131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.743180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.743228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.743286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.743337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.825 [2024-09-28 01:35:00.743440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:22:19.825 [2024-09-28 01:35:00.743482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.825 [2024-09-28 01:35:00.743498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.825 [2024-09-28 01:35:00.743528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58840 len:8 PRP1 0x0 PRP2 0x0 00:22:19.825 [2024-09-28 01:35:00.743551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:00.743788] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 
00:22:19.825 [2024-09-28 01:35:00.743822] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 
00:22:19.825 [2024-09-28 01:35:00.743890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:19.825 [2024-09-28 01:35:00.743917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:19.825 [2024-09-28 01:35:00.743938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:19.825 [2024-09-28 01:35:00.743956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:19.825 [2024-09-28 01:35:00.743975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:19.825 [2024-09-28 01:35:00.743992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:19.825 [2024-09-28 01:35:00.744027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:19.825 [2024-09-28 01:35:00.744046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:19.825 [2024-09-28 01:35:00.744063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:19.825 [2024-09-28 01:35:00.747752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:22:19.825 [2024-09-28 01:35:00.747832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 
00:22:19.825 [2024-09-28 01:35:00.781643] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:19.825 6932.00 IOPS, 27.08 MiB/s 7437.33 IOPS, 29.05 MiB/s 7671.00 IOPS, 29.96 MiB/s [2024-09-28 01:35:04.392706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.825 [2024-09-28 01:35:04.392795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:04.392854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.825 [2024-09-28 01:35:04.392877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:04.392899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.825 [2024-09-28 01:35:04.392917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:04.392937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.825 [2024-09-28 01:35:04.392955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:04.392975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.825 [2024-09-28 01:35:04.393011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.825 [2024-09-28 01:35:04.393032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.393648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 
01:35:04.393670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.393690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.393731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.393771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.393812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.393852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.393892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.393941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.393962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.393983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.394025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.394065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.394119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.394177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.394215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.394254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.394293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.826 [2024-09-28 01:35:04.394332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.394371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.826 [2024-09-28 01:35:04.394391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.826 [2024-09-28 01:35:04.394409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.394945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.394965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45712 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.394983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:19.827 [2024-09-28 01:35:04.395541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.827 [2024-09-28 01:35:04.395782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.395822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.395879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.395919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.395973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.395993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.396012] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.396032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.396050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.396070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.396088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.396109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.396127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.396147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.396165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.396185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.396203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.396224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.396249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.827 [2024-09-28 01:35:04.396270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.827 [2024-09-28 01:35:04.396289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.396328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.396368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.396974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.396999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.397112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.397151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.397191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.397229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.397268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:19.828 [2024-09-28 01:35:04.397288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.397307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.397356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.828 [2024-09-28 01:35:04.397394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397727] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.397967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.397988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.828 [2024-09-28 01:35:04.398006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.828 [2024-09-28 01:35:04.398027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:04.398045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:22:19.829 [2024-09-28 01:35:04.398087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46088 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 
01:35:04.398157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46544 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46552 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46560 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46568 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46576 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46584 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398576] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46592 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.829 [2024-09-28 01:35:04.398654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.829 [2024-09-28 01:35:04.398668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46600 len:8 PRP1 0x0 PRP2 0x0 00:22:19.829 [2024-09-28 01:35:04.398686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.398936] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 00:22:19.829 [2024-09-28 01:35:04.398963] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:22:19.829 [2024-09-28 01:35:04.399052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.829 [2024-09-28 01:35:04.399081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.399104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.829 [2024-09-28 01:35:04.399123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.399143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.829 [2024-09-28 01:35:04.399162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.399182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.829 [2024-09-28 01:35:04.399201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:04.399219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.829 [2024-09-28 01:35:04.399283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:19.829 [2024-09-28 01:35:04.403194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.829 [2024-09-28 01:35:04.436915] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:19.829 7700.00 IOPS, 30.08 MiB/s 7809.83 IOPS, 30.51 MiB/s 7887.43 IOPS, 30.81 MiB/s 7948.50 IOPS, 31.05 MiB/s 7992.33 IOPS, 31.22 MiB/s [2024-09-28 01:35:08.980201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.980286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.829 [2024-09-28 01:35:08.980365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.829 [2024-09-28 01:35:08.980407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.829 [2024-09-28 01:35:08.980446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.829 [2024-09-28 01:35:08.980503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.829 [2024-09-28 01:35:08.980542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.829 [2024-09-28 01:35:08.980582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.829 [2024-09-28 01:35:08.980620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.829 [2024-09-28 01:35:08.980658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.980696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.980734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.980792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.980830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.980868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.980922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.980962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.829 [2024-09-28 01:35:08.980982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.829 [2024-09-28 01:35:08.981001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.981041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.981079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.981116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.981155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.981192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.981230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.981276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.981315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:19.830 [2024-09-28 01:35:08.981541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981943] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.981981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.981999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.982036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.982075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.982112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.982150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.982188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.982226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.982263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.982309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982329] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.982347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.982386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.830 [2024-09-28 01:35:08.982424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.982491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.982531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.982569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.830 [2024-09-28 01:35:08.982590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.830 [2024-09-28 01:35:08.982608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95784 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.982966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.982984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.983065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.983109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.983149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.983189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 
[2024-09-28 01:35:08.983229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.831 [2024-09-28 01:35:08.983922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.983963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.983983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.984002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.984022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.984041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.984060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.984079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.984099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.831 [2024-09-28 01:35:08.984117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.831 [2024-09-28 01:35:08.984138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.832 [2024-09-28 01:35:08.984156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.832 [2024-09-28 01:35:08.984194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.832 [2024-09-28 01:35:08.984233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.832 [2024-09-28 01:35:08.984275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.832 [2024-09-28 01:35:08.984313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.832 [2024-09-28 01:35:08.984353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.832 [2024-09-28 01:35:08.984391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.832 [2024-09-28 01:35:08.984437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.832 [2024-09-28 01:35:08.984494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.832 [2024-09-28 01:35:08.984533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.832 [2024-09-28 01:35:08.984573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.832 [2024-09-28 01:35:08.984611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:22:19.832 [2024-09-28 01:35:08.984653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.984669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.984685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95496 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.984702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.984737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.984751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95952 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.984768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.984800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.984814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95960 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.984831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.984864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.984880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95968 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.984897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.984928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.984951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95976 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.984969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.984987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95984 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95992 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96000 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96008 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96016 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 
01:35:08.985297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96024 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96032 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96040 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.832 [2024-09-28 01:35:08.985652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.832 [2024-09-28 01:35:08.985666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:22:19.832 [2024-09-28 01:35:08.985683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.832 [2024-09-28 01:35:08.985700] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.833 [2024-09-28 01:35:08.985713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.833 [2024-09-28 01:35:08.985728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:22:19.833 [2024-09-28 01:35:08.985745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.985762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.833 [2024-09-28 01:35:08.985775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.833 [2024-09-28 01:35:08.985790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:22:19.833 [2024-09-28 01:35:08.985806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.985824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.833 [2024-09-28 01:35:08.985838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.833 [2024-09-28 01:35:08.985853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:22:19.833 [2024-09-28 01:35:08.985869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.985900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.833 [2024-09-28 01:35:08.985918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.833 [2024-09-28 01:35:08.985936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0 00:22:19.833 [2024-09-28 01:35:08.985954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.985980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.833 [2024-09-28 01:35:08.985995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.833 [2024-09-28 01:35:08.986010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:22:19.833 [2024-09-28 01:35:08.986027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.986044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.833 [2024-09-28 01:35:08.986058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.833 [2024-09-28 01:35:08.986072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:22:19.833 [2024-09-28 01:35:08.986089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.986106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:19.833 [2024-09-28 01:35:08.986120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.833 [2024-09-28 01:35:08.986135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96120 len:8 PRP1 0x0 PRP2 0x0 00:22:19.833 [2024-09-28 01:35:08.986152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.986169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.833 [2024-09-28 01:35:08.986183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.833 [2024-09-28 01:35:08.986197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96128 len:8 PRP1 0x0 PRP2 0x0 00:22:19.833 [2024-09-28 01:35:08.986214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.986456] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 00:22:19.833 [2024-09-28 01:35:08.986483] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:22:19.833 [2024-09-28 01:35:08.986562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.833 [2024-09-28 01:35:08.986590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.986612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.833 [2024-09-28 01:35:08.986630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.986649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.833 [2024-09-28 01:35:08.986666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.986685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.833 [2024-09-28 01:35:08.986702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.833 [2024-09-28 01:35:08.986720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.833 [2024-09-28 01:35:08.986786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:19.833 [2024-09-28 01:35:08.990326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.833 [2024-09-28 01:35:09.030880] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
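The block above is the second failover event in this run: I/O still queued on the dying qpair is completed manually with ABORTED - SQ DELETION status, the qpair 0x61500002c180 is disconnected and freed, and bdev_nvme fails over from 10.0.0.3:4422 to 10.0.0.3:4420 before resetting the controller. A quick standalone check that the NVMe0 controller survived such an event, borrowing the same RPC socket and grep the test itself uses further down (a sketch, not part of the test flow):

    # Ask the bdevperf app on /var/tmp/bdevperf.sock which NVMe controllers it still has attached
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
        | grep -q NVMe0 && echo "NVMe0 still attached after failover"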
00:22:19.833 7969.00 IOPS, 31.13 MiB/s 7987.09 IOPS, 31.20 MiB/s 8014.83 IOPS, 31.31 MiB/s 8038.31 IOPS, 31.40 MiB/s 8064.00 IOPS, 31.50 MiB/s 8078.53 IOPS, 31.56 MiB/s
00:22:19.833 Latency(us)
00:22:19.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:19.833 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:19.833 Verification LBA range: start 0x0 length 0x4000
00:22:19.833 NVMe0n1 : 15.01 8079.86 31.56 234.84 0.00 15364.28 595.78 17754.30
00:22:19.833 ===================================================================================================================
00:22:19.833 Total : 8079.86 31.56 234.84 0.00 15364.28 595.78 17754.30
00:22:19.833 Received shutdown signal, test time was about 15.000000 seconds
00:22:19.833
00:22:19.833 Latency(us)
00:22:19.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:19.833 ===================================================================================================================
00:22:19.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:19.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81402
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81402 /var/tmp/bdevperf.sock
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81402 ']'
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
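The failover.sh@65-67 trace above is the pass/fail check for the previous run: it counts the "Resetting controller successful" notices emitted by bdev_nvme (one per failover hop, three in this run) and fails the test if the count differs. A minimal sketch of that verification step, assuming the bdevperf output has been captured to a hypothetical file named bdevperf.log:

    # Count how many times bdev_nvme reported a successful controller reset
    count=$(grep -c 'Resetting controller successful' bdevperf.log)   # bdevperf.log is an assumed capture path
    # One reset is expected per failover hop (4420 -> 4421 -> 4422 -> 4420), i.e. exactly 3
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi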
00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.833 01:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:20.771 01:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.771 01:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:20.771 01:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:21.029 [2024-09-28 01:35:16.953287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:21.288 01:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:21.288 [2024-09-28 01:35:17.193542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:21.288 01:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.856 NVMe0n1 00:22:21.856 01:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.115 00:22:22.115 01:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.375 00:22:22.375 01:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:22.375 01:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:22.634 01:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.892 01:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:26.177 01:35:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:26.177 01:35:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:26.177 01:35:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81479 00:22:26.177 01:35:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.177 01:35:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 81479 00:22:27.553 { 00:22:27.553 "results": [ 00:22:27.553 { 00:22:27.553 "job": "NVMe0n1", 00:22:27.553 "core_mask": "0x1", 00:22:27.553 "workload": "verify", 00:22:27.553 "status": "finished", 00:22:27.553 "verify_range": { 00:22:27.553 "start": 0, 00:22:27.553 "length": 16384 00:22:27.553 }, 00:22:27.553 "queue_depth": 128, 00:22:27.553 "io_size": 4096, 
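Condensing the RPC sequence traced above: the target gains extra listeners on ports 4421 and 4422, bdevperf attaches the same subsystem over 4420, 4421 and 4422 (giving bdev_nvme three paths to NVMe0), and the 4420 path is then detached so the driver must fail over; the later cat of try.txt confirms this with the "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421" notice. A hand-written condensation of those RPCs, with addresses, ports and NQNs copied from the trace (the rpc/sock shorthands are illustrative only):

    # illustrative condensation of the traced RPC calls
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422
    for port in 4420 4421 4422; do
        $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.3 -s "$port" -f ipv4 -n "$nqn"
    done
    $rpc -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
    # drop the active 4420 path and give bdev_nvme time to fail over
    $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$nqn"
    sleep 3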
00:22:27.553 "runtime": 1.012443, 00:22:27.553 "iops": 6509.008408374595, 00:22:27.553 "mibps": 25.425814095213262, 00:22:27.553 "io_failed": 0, 00:22:27.553 "io_timeout": 0, 00:22:27.553 "avg_latency_us": 19593.051289833078, 00:22:27.553 "min_latency_us": 2591.650909090909, 00:22:27.553 "max_latency_us": 16801.04727272727 00:22:27.553 } 00:22:27.553 ], 00:22:27.553 "core_count": 1 00:22:27.553 } 00:22:27.553 01:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:27.553 [2024-09-28 01:35:15.738010] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:27.553 [2024-09-28 01:35:15.738200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81402 ] 00:22:27.553 [2024-09-28 01:35:15.906739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.553 [2024-09-28 01:35:16.067348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.553 [2024-09-28 01:35:16.230454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:27.553 [2024-09-28 01:35:18.680447] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:27.553 [2024-09-28 01:35:18.680596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.553 [2024-09-28 01:35:18.680638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.553 [2024-09-28 01:35:18.680664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.553 [2024-09-28 01:35:18.680684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.553 [2024-09-28 01:35:18.680703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.553 [2024-09-28 01:35:18.680721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.553 [2024-09-28 01:35:18.680739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.553 [2024-09-28 01:35:18.680758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.553 [2024-09-28 01:35:18.680775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:27.553 [2024-09-28 01:35:18.680855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.553 [2024-09-28 01:35:18.680905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:27.553 [2024-09-28 01:35:18.688042] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:27.553 Running I/O for 1 seconds... 
00:22:27.553 6462.00 IOPS, 25.24 MiB/s 00:22:27.553 Latency(us) 00:22:27.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.553 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:27.553 Verification LBA range: start 0x0 length 0x4000 00:22:27.553 NVMe0n1 : 1.01 6509.01 25.43 0.00 0.00 19593.05 2591.65 16801.05 00:22:27.553 =================================================================================================================== 00:22:27.553 Total : 6509.01 25.43 0.00 0.00 19593.05 2591.65 16801.05 00:22:27.553 01:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.553 01:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:27.554 01:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.812 01:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.812 01:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:28.379 01:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.638 01:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:31.924 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.924 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:31.924 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 81402 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81402 ']' 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81402 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81402 00:22:31.925 killing process with pid 81402 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81402' 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81402 00:22:31.925 01:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81402 00:22:32.862 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:32.862 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.121 
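The JSON block above is what bdevperf.py's perform_tests prints for the 1-second verify run. The script itself does not parse it; its pass/fail checks are the grep -q NVMe0 calls against bdev_nvme_get_controllers. The headline numbers could nevertheless be pulled out mechanically, for example with a small jq one-liner like the following, offered purely as an illustration and not part of the test:

    # illustration only: summarize the perform_tests JSON shown above
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests |
      jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"'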
01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:33.121 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:33.121 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:33.121 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:33.121 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:33.121 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.121 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:33.121 01:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.121 rmmod nvme_tcp 00:22:33.121 rmmod nvme_fabrics 00:22:33.121 rmmod nvme_keyring 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 81141 ']' 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 81141 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81141 ']' 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81141 00:22:33.121 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:22:33.380 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.380 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81141 00:22:33.380 killing process with pid 81141 00:22:33.380 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:33.380 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:33.380 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81141' 00:22:33.380 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81141 00:22:33.380 01:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81141 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:34.317 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:22:34.576 00:22:34.576 real 0m36.054s 00:22:34.576 user 2m17.454s 00:22:34.576 sys 0m5.463s 00:22:34.576 ************************************ 00:22:34.576 END TEST nvmf_failover 00:22:34.576 ************************************ 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.576 ************************************ 00:22:34.576 START TEST nvmf_host_discovery 00:22:34.576 ************************************ 00:22:34.576 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:34.836 * Looking for test storage... 
00:22:34.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:34.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.836 --rc genhtml_branch_coverage=1 00:22:34.836 --rc genhtml_function_coverage=1 00:22:34.836 --rc genhtml_legend=1 00:22:34.836 --rc geninfo_all_blocks=1 00:22:34.836 --rc geninfo_unexecuted_blocks=1 00:22:34.836 00:22:34.836 ' 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:34.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.836 --rc genhtml_branch_coverage=1 00:22:34.836 --rc genhtml_function_coverage=1 00:22:34.836 --rc genhtml_legend=1 00:22:34.836 --rc geninfo_all_blocks=1 00:22:34.836 --rc geninfo_unexecuted_blocks=1 00:22:34.836 00:22:34.836 ' 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:34.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.836 --rc genhtml_branch_coverage=1 00:22:34.836 --rc genhtml_function_coverage=1 00:22:34.836 --rc genhtml_legend=1 00:22:34.836 --rc geninfo_all_blocks=1 00:22:34.836 --rc geninfo_unexecuted_blocks=1 00:22:34.836 00:22:34.836 ' 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:34.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.836 --rc genhtml_branch_coverage=1 00:22:34.836 --rc genhtml_function_coverage=1 00:22:34.836 --rc genhtml_legend=1 00:22:34.836 --rc geninfo_all_blocks=1 00:22:34.836 --rc geninfo_unexecuted_blocks=1 00:22:34.836 00:22:34.836 ' 00:22:34.836 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.837 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:34.837 Cannot find device "nvmf_init_br" 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:34.837 Cannot find device "nvmf_init_br2" 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:34.837 Cannot find device "nvmf_tgt_br" 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.837 Cannot find device "nvmf_tgt_br2" 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:34.837 Cannot find device "nvmf_init_br" 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:34.837 Cannot find device "nvmf_init_br2" 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:22:34.837 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:35.097 Cannot find device "nvmf_tgt_br" 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:35.097 Cannot find device "nvmf_tgt_br2" 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:35.097 Cannot find device "nvmf_br" 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:35.097 Cannot find device "nvmf_init_if" 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:35.097 Cannot find device "nvmf_init_if2" 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:35.097 01:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:35.097 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:35.097 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:35.097 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.097 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:35.097 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:35.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:35.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:22:35.356 00:22:35.356 --- 10.0.0.3 ping statistics --- 00:22:35.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.356 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:35.356 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:35.356 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:22:35.356 00:22:35.356 --- 10.0.0.4 ping statistics --- 00:22:35.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.356 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:35.356 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:35.356 00:22:35.356 --- 10.0.0.1 ping statistics --- 00:22:35.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.356 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:35.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:35.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:22:35.357 00:22:35.357 --- 10.0.0.2 ping statistics --- 00:22:35.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.357 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=81818 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 81818 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 81818 ']' 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.357 01:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.357 [2024-09-28 01:35:31.233047] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:35.357 [2024-09-28 01:35:31.233430] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.616 [2024-09-28 01:35:31.391208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.616 [2024-09-28 01:35:31.538353] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.616 [2024-09-28 01:35:31.538661] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.616 [2024-09-28 01:35:31.538808] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.616 [2024-09-28 01:35:31.538933] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.616 [2024-09-28 01:35:31.538982] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.616 [2024-09-28 01:35:31.539132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.875 [2024-09-28 01:35:31.683117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.443 [2024-09-28 01:35:32.266651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:36.443 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.444 [2024-09-28 01:35:32.274844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.444 01:35:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.444 null0 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.444 null1 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.444 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=81850 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 81850 /tmp/host.sock 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 81850 ']' 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.444 01:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.703 [2024-09-28 01:35:32.394828] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:36.703 [2024-09-28 01:35:32.394954] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81850 ] 00:22:36.703 [2024-09-28 01:35:32.548729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.962 [2024-09-28 01:35:32.708837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.962 [2024-09-28 01:35:32.862713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.530 01:35:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:37.530 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:37.789 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.790 01:35:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.790 [2024-09-28 01:35:33.667371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:37.790 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:22:38.049 01:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:22:38.618 [2024-09-28 01:35:34.340956] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:38.618 [2024-09-28 01:35:34.340992] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:38.618 
[2024-09-28 01:35:34.341025] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:38.618 [2024-09-28 01:35:34.347074] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:38.618 [2024-09-28 01:35:34.412751] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:38.618 [2024-09-28 01:35:34.412784] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.187 01:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.187 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.448 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:39.449 01:35:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.449 [2024-09-28 01:35:35.229336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:39.449 [2024-09-28 01:35:35.230355] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:39.449 [2024-09-28 01:35:35.230411] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:39.449 [2024-09-28 01:35:35.236393] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:22:39.449 01:35:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:22:39.449 [2024-09-28 01:35:35.295000] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:39.449 [2024-09-28 01:35:35.295072] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:39.449 [2024-09-28 01:35:35.295085] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:39.449 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:39.450 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.767 [2024-09-28 01:35:35.466778] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:39.767 [2024-09-28 01:35:35.466853] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.767 [2024-09-28 01:35:35.470387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.767 [2024-09-28 01:35:35.470433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.767 [2024-09-28 01:35:35.470496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.767 [2024-09-28 01:35:35.470522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.767 [2024-09-28 01:35:35.470562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.767 [2024-09-28 01:35:35.470583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.767 [2024-09-28 01:35:35.470604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.767 [2024-09-28 01:35:35.470624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.767 [2024-09-28 01:35:35.470644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
local max=10 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:39.767 [2024-09-28 01:35:35.472827] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:22:39.767 [2024-09-28 01:35:35.472916] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:39.767 [2024-09-28 01:35:35.473010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.767 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:39.768 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.056 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:40.056 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:40.056 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:22:40.056 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.057 01:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.993 [2024-09-28 01:35:36.882148] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:40.993 [2024-09-28 01:35:36.882180] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:40.993 [2024-09-28 01:35:36.882216] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:40.993 [2024-09-28 01:35:36.888212] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:22:41.251 [2024-09-28 01:35:36.958068] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:41.251 [2024-09-28 01:35:36.958117] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.251 request: 00:22:41.251 { 00:22:41.251 "name": "nvme", 00:22:41.251 "trtype": "tcp", 00:22:41.251 "traddr": "10.0.0.3", 00:22:41.251 "adrfam": "ipv4", 00:22:41.251 "trsvcid": "8009", 00:22:41.251 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:41.251 "wait_for_attach": true, 00:22:41.251 "method": "bdev_nvme_start_discovery", 00:22:41.251 "req_id": 1 00:22:41.251 } 00:22:41.251 Got JSON-RPC error response 00:22:41.251 response: 00:22:41.251 { 00:22:41.251 "code": -17, 00:22:41.251 "message": "File exists" 00:22:41.251 } 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:41.251 01:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:22:41.251 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.252 request: 00:22:41.252 { 00:22:41.252 "name": "nvme_second", 00:22:41.252 "trtype": "tcp", 00:22:41.252 "traddr": "10.0.0.3", 00:22:41.252 "adrfam": "ipv4", 00:22:41.252 "trsvcid": "8009", 00:22:41.252 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:41.252 "wait_for_attach": true, 00:22:41.252 "method": "bdev_nvme_start_discovery", 00:22:41.252 "req_id": 1 00:22:41.252 } 00:22:41.252 Got JSON-RPC error response 00:22:41.252 response: 00:22:41.252 { 00:22:41.252 "code": -17, 00:22:41.252 "message": "File exists" 00:22:41.252 } 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:41.252 01:35:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:41.252 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.510 01:35:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.446 [2024-09-28 01:35:38.214754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.446 [2024-09-28 01:35:38.215002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:22:42.446 [2024-09-28 01:35:38.215136] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:42.446 [2024-09-28 01:35:38.215164] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:42.446 [2024-09-28 01:35:38.215187] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:43.382 [2024-09-28 01:35:39.214759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.382 [2024-09-28 01:35:39.214986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:22:43.382 [2024-09-28 01:35:39.215105] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:43.382 [2024-09-28 01:35:39.215132] nvme.c: 
831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:43.382 [2024-09-28 01:35:39.215153] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:44.318 [2024-09-28 01:35:40.214565] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:22:44.318 request: 00:22:44.318 { 00:22:44.318 "name": "nvme_second", 00:22:44.318 "trtype": "tcp", 00:22:44.318 "traddr": "10.0.0.3", 00:22:44.318 "adrfam": "ipv4", 00:22:44.318 "trsvcid": "8010", 00:22:44.318 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:44.318 "wait_for_attach": false, 00:22:44.318 "attach_timeout_ms": 3000, 00:22:44.318 "method": "bdev_nvme_start_discovery", 00:22:44.318 "req_id": 1 00:22:44.318 } 00:22:44.318 Got JSON-RPC error response 00:22:44.318 response: 00:22:44.318 { 00:22:44.318 "code": -110, 00:22:44.318 "message": "Connection timed out" 00:22:44.318 } 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:44.318 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 81850 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.577 rmmod nvme_tcp 00:22:44.577 rmmod nvme_fabrics 00:22:44.577 rmmod nvme_keyring 00:22:44.577 01:35:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 81818 ']' 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 81818 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 81818 ']' 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 81818 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81818 00:22:44.577 killing process with pid 81818 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81818' 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 81818 00:22:44.577 01:35:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 81818 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:45.513 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.771 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:22:45.771 00:22:45.771 real 0m11.158s 00:22:45.771 user 0m20.805s 00:22:45.771 sys 0m2.030s 00:22:45.772 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.772 ************************************ 00:22:45.772 END TEST nvmf_host_discovery 00:22:45.772 ************************************ 00:22:45.772 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.772 01:35:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:45.772 01:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:45.772 01:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.772 01:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.772 ************************************ 00:22:45.772 START TEST nvmf_host_multipath_status 00:22:45.772 ************************************ 00:22:45.772 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:46.031 * Looking for test storage... 
00:22:46.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:46.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.031 --rc genhtml_branch_coverage=1 00:22:46.031 --rc genhtml_function_coverage=1 00:22:46.031 --rc genhtml_legend=1 00:22:46.031 --rc geninfo_all_blocks=1 00:22:46.031 --rc geninfo_unexecuted_blocks=1 00:22:46.031 00:22:46.031 ' 00:22:46.031 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:46.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.032 --rc genhtml_branch_coverage=1 00:22:46.032 --rc genhtml_function_coverage=1 00:22:46.032 --rc genhtml_legend=1 00:22:46.032 --rc geninfo_all_blocks=1 00:22:46.032 --rc geninfo_unexecuted_blocks=1 00:22:46.032 00:22:46.032 ' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.032 --rc genhtml_branch_coverage=1 00:22:46.032 --rc genhtml_function_coverage=1 00:22:46.032 --rc genhtml_legend=1 00:22:46.032 --rc geninfo_all_blocks=1 00:22:46.032 --rc geninfo_unexecuted_blocks=1 00:22:46.032 00:22:46.032 ' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.032 --rc genhtml_branch_coverage=1 00:22:46.032 --rc genhtml_function_coverage=1 00:22:46.032 --rc genhtml_legend=1 00:22:46.032 --rc geninfo_all_blocks=1 00:22:46.032 --rc geninfo_unexecuted_blocks=1 00:22:46.032 00:22:46.032 ' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:46.032 01:35:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:46.032 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:46.033 Cannot find device "nvmf_init_br" 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:46.033 Cannot find device "nvmf_init_br2" 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:46.033 Cannot find device "nvmf_tgt_br" 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:22:46.033 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:46.291 Cannot find device "nvmf_tgt_br2" 00:22:46.291 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:22:46.291 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:46.291 Cannot find device "nvmf_init_br" 00:22:46.291 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:22:46.291 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:46.291 Cannot find device "nvmf_init_br2" 00:22:46.291 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:22:46.291 01:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:46.291 Cannot find device "nvmf_tgt_br" 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:46.291 Cannot find device "nvmf_tgt_br2" 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:46.291 Cannot find device "nvmf_br" 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:22:46.291 Cannot find device "nvmf_init_if" 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:46.291 Cannot find device "nvmf_init_if2" 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:46.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:46.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:46.291 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:46.292 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:46.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:46.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:22:46.551 00:22:46.551 --- 10.0.0.3 ping statistics --- 00:22:46.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.551 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:46.551 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:46.551 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:22:46.551 00:22:46.551 --- 10.0.0.4 ping statistics --- 00:22:46.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.551 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:46.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:46.551 00:22:46.551 --- 10.0.0.1 ping statistics --- 00:22:46.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.551 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:46.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:22:46.551 00:22:46.551 --- 10.0.0.2 ping statistics --- 00:22:46.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.551 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=82366 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 82366 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 82366 ']' 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.551 01:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:46.551 [2024-09-28 01:35:42.435941] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:46.551 [2024-09-28 01:35:42.436290] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.810 [2024-09-28 01:35:42.611716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:47.069 [2024-09-28 01:35:42.844486] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.069 [2024-09-28 01:35:42.844774] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.069 [2024-09-28 01:35:42.844927] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.069 [2024-09-28 01:35:42.845038] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.069 [2024-09-28 01:35:42.845152] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.069 [2024-09-28 01:35:42.845551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.069 [2024-09-28 01:35:42.845568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.328 [2024-09-28 01:35:43.028287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:47.586 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.586 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:22:47.586 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:47.587 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:47.587 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:47.587 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.587 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82366 00:22:47.587 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:47.845 [2024-09-28 01:35:43.750790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.104 01:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:48.104 Malloc0 00:22:48.363 01:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:48.363 01:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:48.622 01:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:48.881 [2024-09-28 01:35:44.752125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:48.881 01:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:49.139 [2024-09-28 01:35:45.036146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:49.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82417 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82417 /var/tmp/bdevperf.sock 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 82417 ']' 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.139 01:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:50.515 01:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.515 01:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:22:50.515 01:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:50.515 01:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:50.773 Nvme0n1 00:22:50.773 01:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:51.341 Nvme0n1 00:22:51.341 01:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:51.341 01:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.244 01:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:53.244 01:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:53.503 01:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:53.761 01:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:54.699 01:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:54.699 01:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:54.699 01:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.699 01:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:54.958 01:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.958 01:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:54.958 01:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.958 01:35:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:55.217 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:55.217 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:55.217 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.217 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.476 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.476 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.476 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.476 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.735 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.735 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.735 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.735 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.994 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.994 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:55.994 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.994 01:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:56.252 01:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.252 01:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:56.252 01:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:56.511 01:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:56.770 01:35:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:57.707 01:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:57.707 01:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:57.707 01:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.707 01:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.966 01:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.966 01:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:57.966 01:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:57.966 01:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.225 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.225 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.226 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.226 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.485 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.485 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.485 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.485 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.743 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.743 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:58.743 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:58.743 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.002 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.002 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:59.002 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.002 01:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.262 01:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.262 01:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:59.262 01:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:59.521 01:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:22:59.780 01:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:00.749 01:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:00.749 01:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.749 01:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.749 01:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:01.009 01:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.009 01:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:01.009 01:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.009 01:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.269 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.269 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.269 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.269 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.528 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.528 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:23:01.528 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.528 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.787 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.787 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:01.787 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:01.787 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.046 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.046 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:02.046 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.046 01:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.305 01:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.305 01:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:02.305 01:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:02.563 01:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:02.822 01:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:03.759 01:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:03.759 01:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:03.759 01:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.759 01:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.018 01:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.018 01:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:04.018 01:35:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.018 01:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.277 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.277 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.277 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.277 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.536 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.536 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.536 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.536 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:04.794 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.794 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:04.794 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:04.794 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.054 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.054 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:05.054 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.054 01:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.313 01:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.313 01:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:05.313 01:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:05.572 01:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:05.831 01:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:06.767 01:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:06.767 01:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:06.767 01:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.767 01:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.026 01:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.026 01:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:07.026 01:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.026 01:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.285 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.285 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.285 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.285 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.544 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.544 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.544 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.544 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:07.803 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.803 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:07.803 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.803 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:23:08.062 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.062 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:08.062 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.062 01:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.321 01:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.321 01:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:08.321 01:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:08.579 01:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:08.838 01:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:09.775 01:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:09.775 01:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:09.775 01:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.775 01:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.038 01:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.038 01:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.038 01:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.038 01:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.297 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.297 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.297 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.297 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
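The port_status checks that repeat throughout this run (multipath_status.sh@64, invoked six at a time via check_status at @68-@73) all follow the same pattern visible in the log: query bdev_nvme_get_io_paths over the bdevperf RPC socket, pull one attribute (current/connected/accessible) of the path listening on the given port with jq, and compare it against the expected value. A minimal sketch of those helpers, reconstructed from the log output above; the real multipath_status.sh may differ in detail:

port_status() {
    # port, attribute (current|connected|accessible), expected value
    local port="$1" attr="$2" expected="$3"
    local actual
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

check_status() {
    # Expected current/connected/accessible for the 4420 and 4421 paths, in the
    # argument order seen in the log (e.g. "check_status false true true true false true").
    port_status 4420 current "$1" && port_status 4421 current "$2" &&
    port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}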
00:23:10.556 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.556 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:10.556 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.556 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:10.814 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.814 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:10.814 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.814 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.073 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.073 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.073 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.073 01:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.332 01:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.332 01:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:11.591 01:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:11.591 01:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:11.591 01:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:11.850 01:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:13.225 01:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:13.225 01:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:13.225 01:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
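Each ANA transition in this test is driven by set_ANA_state (multipath_status.sh@59-@60), which, as the RPC calls above show, updates the ANA state advertised by the target's two TCP listeners; the script then sleeps one second so the host can observe the change before re-checking path status. A sketch inferred directly from the logged commands; treat it as illustrative rather than the script's verbatim source:

set_ANA_state() {
    # $1 -> state for the 4420 listener, $2 -> state for the 4421 listener
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

For example, "set_ANA_state non_optimized inaccessible" leaves 4420 as a usable but non-preferred path and takes 4421 out of service, which is why the corresponding check expects 4420 to be the current path and 4421 to remain connected but not accessible.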
00:23:13.225 01:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:13.225 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.225 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:13.225 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.225 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:13.485 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.485 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:13.485 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.485 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:13.744 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.744 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:13.744 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.744 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.004 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.004 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.004 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.004 01:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.263 01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.264 01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:14.264 01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.264 01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:14.523 01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.523 
01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:14.523 01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:14.782 01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:15.041 01:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:15.978 01:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:15.978 01:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:15.978 01:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.978 01:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.237 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.237 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:16.237 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.237 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.496 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.496 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.496 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.496 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:16.756 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.756 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:16.756 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.756 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.014 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.014 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:17.015 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.015 01:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.274 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.274 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:17.274 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.274 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.533 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.533 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:17.533 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:17.792 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:18.051 01:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:18.988 01:36:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:18.988 01:36:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:18.988 01:36:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.988 01:36:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.555 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.555 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:19.555 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.555 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.555 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.555 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:23:19.555 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.555 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:19.826 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.826 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:19.826 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:19.826 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.196 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.196 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:20.196 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.196 01:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.466 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.466 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:20.466 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.466 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.724 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.724 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:20.724 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:20.983 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:21.242 01:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:22.180 01:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:22.180 01:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:22.180 01:36:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.180 01:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.439 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.439 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:22.439 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.439 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.699 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.699 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.699 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.699 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.959 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.959 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.959 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:22.959 01:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.218 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.218 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:23.218 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.218 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:23.477 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.477 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:23.477 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.477 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82417 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 82417 ']' 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 82417 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82417 00:23:23.736 killing process with pid 82417 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82417' 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 82417 00:23:23.736 01:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 82417 00:23:23.736 { 00:23:23.736 "results": [ 00:23:23.736 { 00:23:23.736 "job": "Nvme0n1", 00:23:23.736 "core_mask": "0x4", 00:23:23.736 "workload": "verify", 00:23:23.736 "status": "terminated", 00:23:23.736 "verify_range": { 00:23:23.736 "start": 0, 00:23:23.736 "length": 16384 00:23:23.736 }, 00:23:23.736 "queue_depth": 128, 00:23:23.736 "io_size": 4096, 00:23:23.736 "runtime": 32.527983, 00:23:23.736 "iops": 7846.7207757702035, 00:23:23.736 "mibps": 30.651253030352358, 00:23:23.736 "io_failed": 0, 00:23:23.736 "io_timeout": 0, 00:23:23.736 "avg_latency_us": 16285.026019579585, 00:23:23.736 "min_latency_us": 465.45454545454544, 00:23:23.736 "max_latency_us": 4057035.869090909 00:23:23.736 } 00:23:23.736 ], 00:23:23.736 "core_count": 1 00:23:23.736 } 00:23:24.683 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82417 00:23:24.683 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:24.683 [2024-09-28 01:35:45.163551] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
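The bdevperf summary printed above is internally consistent: 7846.72 IOPS of 4096-byte verify reads over the ~32.5 s run works out to roughly 30.65 MiB/s, matching the reported "mibps" field. A quick cross-check (hypothetical post-processing, not part of the test itself):

echo '7846.7207757702035 * 4096 / 1048576' | bc -l   # ~= 30.65 MiB/s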
00:23:24.683 [2024-09-28 01:35:45.163723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82417 ] 00:23:24.683 [2024-09-28 01:35:45.337558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.683 [2024-09-28 01:35:45.552813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.683 [2024-09-28 01:35:45.706899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:24.683 [2024-09-28 01:35:46.944591] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:23:24.683 Running I/O for 90 seconds... 00:23:24.683 8217.00 IOPS, 32.10 MiB/s 8356.50 IOPS, 32.64 MiB/s 8339.00 IOPS, 32.57 MiB/s 8340.25 IOPS, 32.58 MiB/s 8331.40 IOPS, 32.54 MiB/s 8347.17 IOPS, 32.61 MiB/s 8373.29 IOPS, 32.71 MiB/s 8370.62 IOPS, 32.70 MiB/s 8382.11 IOPS, 32.74 MiB/s 8388.70 IOPS, 32.77 MiB/s 8385.00 IOPS, 32.75 MiB/s 8387.92 IOPS, 32.77 MiB/s 8380.23 IOPS, 32.74 MiB/s 8366.21 IOPS, 32.68 MiB/s [2024-09-28 01:36:01.256333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.683 [2024-09-28 01:36:01.256399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.683 [2024-09-28 01:36:01.256489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.683 [2024-09-28 01:36:01.256526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.683 [2024-09-28 01:36:01.256560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.683 [2024-09-28 01:36:01.256582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.256609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.684 [2024-09-28 01:36:01.256629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.256655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.684 [2024-09-28 01:36:01.256675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.256701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.684 [2024-09-28 01:36:01.256724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.256756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:120 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.256777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.256804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.256825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.256866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.256905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.256938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.256962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.256990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:24.684 [2024-09-28 01:36:01.257739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.684 [2024-09-28 01:36:01.257759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 
m:0 dnr:0
00:23:24.684 [2024-09-28 01:36:01.257789 - 01:36:01.278256] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ and WRITE commands on sqid:1 nsid:1 (lba 30840-31856, len:8, SGL TRANSPORT DATA BLOCK / SGL DATA BLOCK OFFSET) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:23:24.690 [2024-09-28 01:36:01.278256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:23:24.690 [2024-09-28 01:36:01.278276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.278332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.278381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.278429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.278493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.278553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.278601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.278649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.278741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.278796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.278844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.278892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.278945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.278975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.279011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.279086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.279139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.279197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.279246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.279297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.279363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.279449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.279525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.279581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.690 [2024-09-28 01:36:01.279636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.279686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.279736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.690 [2024-09-28 01:36:01.279806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:24.690 [2024-09-28 01:36:01.279836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.279857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.279883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.279904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.279932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.279955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:23:24.691 [2024-09-28 01:36:01.279984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.280958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.280985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.281006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.281053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.281109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.281157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.281218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:24.691 [2024-09-28 01:36:01.281599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.691 [2024-09-28 01:36:01.281802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.281848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.281896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:24.691 [2024-09-28 01:36:01.281930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.691 [2024-09-28 01:36:01.281952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.281979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.281999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.282047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.282093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.282162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.282212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.282953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.282981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.283051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:01.283077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.283107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.283129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:24.692 [2024-09-28 01:36:01.283159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.283187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.283219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.283241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.283269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.283291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.283320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.283341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.283387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.283427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.283992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.284042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:01.284111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:01.284138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:24.692 7902.87 IOPS, 30.87 MiB/s 7408.94 IOPS, 28.94 MiB/s 6973.12 IOPS, 27.24 MiB/s 6585.72 IOPS, 25.73 MiB/s 6588.74 IOPS, 25.74 MiB/s 6674.50 IOPS, 26.07 MiB/s 6835.90 IOPS, 26.70 MiB/s 7024.86 IOPS, 27.44 MiB/s 7198.91 IOPS, 28.12 MiB/s 7306.12 IOPS, 28.54 MiB/s 7355.16 IOPS, 28.73 MiB/s 7389.65 IOPS, 28.87 MiB/s 7442.81 IOPS, 29.07 MiB/s 7590.14 IOPS, 29.65 MiB/s 7709.07 IOPS, 30.11 MiB/s [2024-09-28 01:36:16.921493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:16.921560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:16.921633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.692 [2024-09-28 01:36:16.921661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:16.921692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.692 [2024-09-28 01:36:16.921713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:24.692 [2024-09-28 01:36:16.921741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.921761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.921789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.921809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.921861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.921884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.921911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.921931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.921958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.921978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.922140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.922185] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.922231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.922413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.922492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.922588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.922635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.922693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.922973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.922993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.923187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:89 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.923241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.693 [2024-09-28 01:36:16.923656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.923702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.923748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923774] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.693 [2024-09-28 01:36:16.923794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:24.693 [2024-09-28 01:36:16.923821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.923840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.923866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.923887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.923913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.923932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.923958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.923977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.924036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.924082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 
m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.924412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.924488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.924538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.924740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.924788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.924836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.924878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.924898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.926438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.926514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.926561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.926609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.694 [2024-09-28 01:36:16.926654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.926722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.926781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.926830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.926876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.926921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.926966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:24.694 [2024-09-28 01:36:16.926993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.694 [2024-09-28 01:36:16.927044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:24.694 7813.80 IOPS, 30.52 MiB/s 7834.00 IOPS, 30.60 MiB/s 7842.94 IOPS, 30.64 MiB/s Received shutdown signal, test time was about 32.528880 seconds 00:23:24.694 00:23:24.694 Latency(us) 00:23:24.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.694 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:24.694 Verification LBA range: start 0x0 length 0x4000 00:23:24.694 Nvme0n1 : 32.53 7846.72 30.65 0.00 0.00 16285.03 465.45 4057035.87 00:23:24.694 =================================================================================================================== 00:23:24.694 Total : 7846.72 30.65 0.00 0.00 16285.03 465.45 4057035.87 00:23:24.694 [2024-09-28 01:36:19.610231] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:23:24.694 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@512 -- # nvmfcleanup 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.954 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:25.213 rmmod nvme_tcp 00:23:25.213 rmmod nvme_fabrics 00:23:25.213 rmmod nvme_keyring 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 82366 ']' 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 82366 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 82366 ']' 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 82366 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82366 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:25.213 killing process with pid 82366 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82366' 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 82366 00:23:25.213 01:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 82366 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # 
nvmf_veth_fini 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:26.150 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.409 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:23:26.410 00:23:26.410 real 0m40.605s 00:23:26.410 user 2m8.476s 00:23:26.410 sys 0m10.692s 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.410 ************************************ 00:23:26.410 END TEST nvmf_host_multipath_status 00:23:26.410 ************************************ 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:26.410 01:36:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.671 ************************************ 00:23:26.671 START TEST nvmf_discovery_remove_ifc 00:23:26.671 ************************************ 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:26.671 * Looking for test storage... 00:23:26.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.671 --rc genhtml_branch_coverage=1 00:23:26.671 --rc genhtml_function_coverage=1 00:23:26.671 --rc genhtml_legend=1 00:23:26.671 --rc geninfo_all_blocks=1 00:23:26.671 --rc geninfo_unexecuted_blocks=1 00:23:26.671 00:23:26.671 ' 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.671 --rc genhtml_branch_coverage=1 00:23:26.671 --rc genhtml_function_coverage=1 00:23:26.671 --rc genhtml_legend=1 00:23:26.671 --rc geninfo_all_blocks=1 00:23:26.671 --rc geninfo_unexecuted_blocks=1 00:23:26.671 00:23:26.671 ' 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.671 --rc genhtml_branch_coverage=1 00:23:26.671 --rc genhtml_function_coverage=1 00:23:26.671 --rc genhtml_legend=1 00:23:26.671 --rc geninfo_all_blocks=1 00:23:26.671 --rc geninfo_unexecuted_blocks=1 00:23:26.671 00:23:26.671 ' 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.671 --rc genhtml_branch_coverage=1 00:23:26.671 --rc genhtml_function_coverage=1 00:23:26.671 --rc genhtml_legend=1 00:23:26.671 --rc geninfo_all_blocks=1 00:23:26.671 --rc geninfo_unexecuted_blocks=1 00:23:26.671 00:23:26.671 ' 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:26.671 01:36:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.671 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:26.672 01:36:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:26.672 Cannot find device "nvmf_init_br" 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:26.672 Cannot find device "nvmf_init_br2" 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:26.672 Cannot find device "nvmf_tgt_br" 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.672 Cannot find device "nvmf_tgt_br2" 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:23:26.672 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:26.931 Cannot find device "nvmf_init_br" 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:26.931 Cannot find device "nvmf_init_br2" 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:26.931 Cannot find device "nvmf_tgt_br" 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:26.931 Cannot find device "nvmf_tgt_br2" 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:26.931 Cannot find device "nvmf_br" 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:26.931 Cannot find device "nvmf_init_if" 00:23:26.931 01:36:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:26.931 Cannot find device "nvmf_init_if2" 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:26.931 01:36:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:26.931 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:27.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:27.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:23:27.190 00:23:27.190 --- 10.0.0.3 ping statistics --- 00:23:27.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.190 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:27.190 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:27.190 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:27.190 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:23:27.190 00:23:27.190 --- 10.0.0.4 ping statistics --- 00:23:27.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.191 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:27.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:27.191 00:23:27.191 --- 10.0.0.1 ping statistics --- 00:23:27.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.191 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:27.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:23:27.191 00:23:27.191 --- 10.0.0.2 ping statistics --- 00:23:27.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.191 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=83261 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 83261 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 83261 ']' 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
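Editor's note: the nvmf_veth_init trace above boils down to a small, self-contained topology: a network namespace for the target, veth pairs bridged on the host side, ACCEPT rules for the NVMe/TCP port, and ping checks before the target app is started. A condensed sketch of those steps follows (only one veth pair per side is shown; names and addresses are taken from the trace, so treat this as an illustration rather than the full common.sh helper):

# Condensed sketch of the topology nvmf_veth_init builds above (illustration only;
# the trace also creates nvmf_init_if2 / nvmf_tgt_if2 the same way)
ip netns add nvmf_tgt_ns_spdk                                    # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listen address
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.3                                                # reachability check before nvmfappstart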
00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.191 01:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.191 [2024-09-28 01:36:23.077168] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:27.191 [2024-09-28 01:36:23.077380] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.450 [2024-09-28 01:36:23.259150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.709 [2024-09-28 01:36:23.489959] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.709 [2024-09-28 01:36:23.490052] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.709 [2024-09-28 01:36:23.490094] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.709 [2024-09-28 01:36:23.490134] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.709 [2024-09-28 01:36:23.490165] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.709 [2024-09-28 01:36:23.490240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.709 [2024-09-28 01:36:23.640679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.277 [2024-09-28 01:36:24.129104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.277 [2024-09-28 01:36:24.137274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:28.277 null0 00:23:28.277 [2024-09-28 01:36:24.169202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83293 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83293 /tmp/host.sock 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 83293 ']' 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.277 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.277 01:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.536 [2024-09-28 01:36:24.281384] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:28.536 [2024-09-28 01:36:24.281535] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83293 ] 00:23:28.536 [2024-09-28 01:36:24.441338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.795 [2024-09-28 01:36:24.664224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.362 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.362 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:23:29.362 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.362 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:29.362 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.362 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.362 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.363 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:29.363 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.363 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.621 [2024-09-28 01:36:25.356773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:29.621 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.621 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:29.621 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.621 01:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.555 [2024-09-28 01:36:26.463604] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:30.555 [2024-09-28 01:36:26.463643] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:30.555 [2024-09-28 01:36:26.463679] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:30.555 [2024-09-28 01:36:26.469667] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:30.814 [2024-09-28 01:36:26.535354] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:30.814 [2024-09-28 01:36:26.535471] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:30.814 [2024-09-28 01:36:26.535574] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:30.814 [2024-09-28 01:36:26.535623] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:30.814 [2024-09-28 01:36:26.535673] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.814 [2024-09-28 01:36:26.542860] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
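Editor's note: the rpc_cmd / jq / sort / xargs traces here and in the iterations below are repeated expansions of the test's bdev-polling helpers from discovery_remove_ifc.sh. A rough reconstruction of what they do is sketched below (simplified: rpc_cmd is the suite's RPC wrapper, and the real wait_for_bdev may differ in details such as timeout handling):

# Rough reconstruction of the helpers expanded in the trace (assumption: no timeout handling shown)
get_bdev_list() {
    # Ask the host app on /tmp/host.sock for its bdevs and print the names, sorted and space-joined
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected value:
    # "nvme0n1" right after discovery attaches, "" after the target interface is torn down
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}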
00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.814 01:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.750 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.750 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.750 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.750 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.750 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.750 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.750 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.007 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.008 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.008 01:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.943 01:36:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.943 01:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.880 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.880 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.880 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.880 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.880 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.880 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.880 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.139 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.139 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.139 01:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:35.075 01:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.012 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.012 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.012 01:36:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.012 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.012 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.012 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.012 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.271 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.271 [2024-09-28 01:36:31.963186] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:36.271 [2024-09-28 01:36:31.963275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.271 [2024-09-28 01:36:31.963298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.271 [2024-09-28 01:36:31.963325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.271 [2024-09-28 01:36:31.963354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.271 [2024-09-28 01:36:31.963367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.272 [2024-09-28 01:36:31.963393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.272 [2024-09-28 01:36:31.963421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.272 [2024-09-28 01:36:31.963448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.272 [2024-09-28 01:36:31.963478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.272 [2024-09-28 01:36:31.963490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.272 [2024-09-28 01:36:31.963504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:36.272 [2024-09-28 01:36:31.973173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:36.272 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:36.272 01:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.272 [2024-09-28 01:36:31.983225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:37.207 01:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.207 01:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.207 01:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.207 01:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.207 01:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.207 01:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.207 01:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.207 [2024-09-28 01:36:33.035561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:37.207 [2024-09-28 01:36:33.035967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:23:37.207 [2024-09-28 01:36:33.036034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:37.208 [2024-09-28 01:36:33.036143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:37.208 [2024-09-28 01:36:33.037529] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:37.208 [2024-09-28 01:36:33.037686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:37.208 [2024-09-28 01:36:33.037728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:37.208 [2024-09-28 01:36:33.037766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:37.208 [2024-09-28 01:36:33.037898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.208 [2024-09-28 01:36:33.037946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:37.208 01:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.208 01:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:37.208 01:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.146 [2024-09-28 01:36:34.038021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.146 [2024-09-28 01:36:34.038086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.146 [2024-09-28 01:36:34.038116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.146 [2024-09-28 01:36:34.038127] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:38.146 [2024-09-28 01:36:34.038157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:38.146 [2024-09-28 01:36:34.038210] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:23:38.146 [2024-09-28 01:36:34.038260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.146 [2024-09-28 01:36:34.038280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.146 [2024-09-28 01:36:34.038297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.146 [2024-09-28 01:36:34.038308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.146 [2024-09-28 01:36:34.038320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.146 [2024-09-28 01:36:34.038346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.146 [2024-09-28 01:36:34.038358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.146 [2024-09-28 01:36:34.038369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.146 [2024-09-28 01:36:34.038381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.146 [2024-09-28 01:36:34.038392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.146 [2024-09-28 01:36:34.038403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:38.146 [2024-09-28 01:36:34.038992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:38.146 [2024-09-28 01:36:34.040022] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:38.146 [2024-09-28 01:36:34.040058] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:38.146 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.146 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.146 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.146 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.146 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.146 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.146 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:38.406 01:36:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.343 01:36:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:39.343 01:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.308 [2024-09-28 01:36:36.048464] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:40.308 [2024-09-28 01:36:36.048527] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:40.308 [2024-09-28 01:36:36.048558] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:40.308 [2024-09-28 01:36:36.054544] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:23:40.308 [2024-09-28 01:36:36.111931] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:40.308 [2024-09-28 01:36:36.112014] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:40.308 [2024-09-28 01:36:36.112072] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:40.308 [2024-09-28 01:36:36.112130] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:23:40.308 [2024-09-28 01:36:36.112161] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:40.308 [2024-09-28 01:36:36.117303] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83293 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 83293 ']' 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 83293 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83293 00:23:40.574 killing process with pid 83293 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83293' 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 83293 00:23:40.574 01:36:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 83293 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.511 rmmod nvme_tcp 00:23:41.511 rmmod nvme_fabrics 00:23:41.511 rmmod nvme_keyring 00:23:41.511 01:36:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 83261 ']' 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 83261 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 83261 ']' 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 83261 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:23:41.511 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.512 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83261 00:23:41.512 killing process with pid 83261 00:23:41.512 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:41.512 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:41.512 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83261' 00:23:41.512 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 83261 00:23:41.512 01:36:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 83261 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:42.449 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:23:42.708 00:23:42.708 real 0m16.254s 00:23:42.708 user 0m27.351s 00:23:42.708 sys 0m2.579s 00:23:42.708 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.708 ************************************ 00:23:42.709 01:36:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.709 END TEST nvmf_discovery_remove_ifc 00:23:42.709 ************************************ 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.968 ************************************ 00:23:42.968 START TEST nvmf_identify_kernel_target 00:23:42.968 ************************************ 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:42.968 * Looking for test storage... 
00:23:42.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:42.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.968 --rc genhtml_branch_coverage=1 00:23:42.968 --rc genhtml_function_coverage=1 00:23:42.968 --rc genhtml_legend=1 00:23:42.968 --rc geninfo_all_blocks=1 00:23:42.968 --rc geninfo_unexecuted_blocks=1 00:23:42.968 00:23:42.968 ' 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:42.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.968 --rc genhtml_branch_coverage=1 00:23:42.968 --rc genhtml_function_coverage=1 00:23:42.968 --rc genhtml_legend=1 00:23:42.968 --rc geninfo_all_blocks=1 00:23:42.968 --rc geninfo_unexecuted_blocks=1 00:23:42.968 00:23:42.968 ' 00:23:42.968 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:42.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.968 --rc genhtml_branch_coverage=1 00:23:42.968 --rc genhtml_function_coverage=1 00:23:42.968 --rc genhtml_legend=1 00:23:42.968 --rc geninfo_all_blocks=1 00:23:42.969 --rc geninfo_unexecuted_blocks=1 00:23:42.969 00:23:42.969 ' 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:42.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.969 --rc genhtml_branch_coverage=1 00:23:42.969 --rc genhtml_function_coverage=1 00:23:42.969 --rc genhtml_legend=1 00:23:42.969 --rc geninfo_all_blocks=1 00:23:42.969 --rc geninfo_unexecuted_blocks=1 00:23:42.969 00:23:42.969 ' 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.969 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:42.969 01:36:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:42.969 01:36:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:42.969 Cannot find device "nvmf_init_br" 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:42.969 Cannot find device "nvmf_init_br2" 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:42.969 Cannot find device "nvmf_tgt_br" 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:23:42.969 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.229 Cannot find device "nvmf_tgt_br2" 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:43.229 Cannot find device "nvmf_init_br" 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:43.229 Cannot find device "nvmf_init_br2" 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:43.229 Cannot find device "nvmf_tgt_br" 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:43.229 Cannot find device "nvmf_tgt_br2" 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:43.229 Cannot find device "nvmf_br" 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:43.229 Cannot find device "nvmf_init_if" 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:43.229 Cannot find device "nvmf_init_if2" 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.229 01:36:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:43.229 01:36:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:43.229 01:36:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.229 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:43.488 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:43.488 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:43.488 00:23:43.488 --- 10.0.0.3 ping statistics --- 00:23:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.488 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:43.488 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:43.488 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:23:43.488 00:23:43.488 --- 10.0.0.4 ping statistics --- 00:23:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.488 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:43.488 00:23:43.488 --- 10.0.0.1 ping statistics --- 00:23:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.488 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:43.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:43.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:23:43.488 00:23:43.488 --- 10.0.0.2 ping statistics --- 00:23:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.488 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.488 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:43.489 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:43.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:43.748 Waiting for block devices as requested 00:23:44.007 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:44.007 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:44.007 No valid GPT data, bailing 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:23:44.007 01:36:39 
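The loop below (common.sh@674-677) walks /sys/block/nvme*, skips zoned devices, and ends up with /dev/nvme1n1 as a namespace that spdk-gpt.py/blkid report as unused. configure_kernel_target then exports that device through the kernel nvmet configfs tree; a sketch of that sequence follows, assuming the untraced redirect targets are the standard nvmet attributes (only the echo values themselves appear in the trace).
    # sketch of configure_kernel_target; redirect targets are inferred from the
    # standard nvmet configfs layout, not shown in the trace
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"     # model string (inferred target;
                                                                     # it reappears in the identify output below)
    echo 1 > "$subsys/attr_allow_any_host"                           # assumed target of the first 'echo 1'
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"           # the free namespace found by the scan below
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                              # expose the subsystem on the port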
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:44.007 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:44.267 No valid GPT data, bailing 00:23:44.267 01:36:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:44.267 No valid GPT data, bailing 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:44.267 No valid GPT data, bailing 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:44.267 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -a 10.0.0.1 -t tcp -s 4420 00:23:44.527 00:23:44.527 Discovery Log Number of Records 2, Generation counter 2 00:23:44.527 =====Discovery Log Entry 0====== 00:23:44.527 trtype: tcp 00:23:44.527 adrfam: ipv4 00:23:44.527 subtype: current discovery subsystem 00:23:44.527 treq: not specified, sq flow control disable supported 00:23:44.527 portid: 1 00:23:44.527 trsvcid: 4420 00:23:44.527 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:44.527 traddr: 10.0.0.1 00:23:44.527 eflags: none 00:23:44.527 sectype: none 00:23:44.527 =====Discovery Log Entry 1====== 00:23:44.527 trtype: tcp 00:23:44.527 adrfam: ipv4 00:23:44.527 subtype: nvme subsystem 00:23:44.527 treq: not 
specified, sq flow control disable supported 00:23:44.527 portid: 1 00:23:44.527 trsvcid: 4420 00:23:44.527 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:44.527 traddr: 10.0.0.1 00:23:44.527 eflags: none 00:23:44.527 sectype: none 00:23:44.527 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:44.527 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:44.786 ===================================================== 00:23:44.786 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:44.786 ===================================================== 00:23:44.786 Controller Capabilities/Features 00:23:44.786 ================================ 00:23:44.786 Vendor ID: 0000 00:23:44.786 Subsystem Vendor ID: 0000 00:23:44.786 Serial Number: c055f31374ac15304ab6 00:23:44.786 Model Number: Linux 00:23:44.786 Firmware Version: 6.8.9-20 00:23:44.786 Recommended Arb Burst: 0 00:23:44.786 IEEE OUI Identifier: 00 00 00 00:23:44.786 Multi-path I/O 00:23:44.786 May have multiple subsystem ports: No 00:23:44.786 May have multiple controllers: No 00:23:44.786 Associated with SR-IOV VF: No 00:23:44.786 Max Data Transfer Size: Unlimited 00:23:44.786 Max Number of Namespaces: 0 00:23:44.786 Max Number of I/O Queues: 1024 00:23:44.786 NVMe Specification Version (VS): 1.3 00:23:44.786 NVMe Specification Version (Identify): 1.3 00:23:44.786 Maximum Queue Entries: 1024 00:23:44.786 Contiguous Queues Required: No 00:23:44.786 Arbitration Mechanisms Supported 00:23:44.786 Weighted Round Robin: Not Supported 00:23:44.786 Vendor Specific: Not Supported 00:23:44.786 Reset Timeout: 7500 ms 00:23:44.786 Doorbell Stride: 4 bytes 00:23:44.786 NVM Subsystem Reset: Not Supported 00:23:44.786 Command Sets Supported 00:23:44.786 NVM Command Set: Supported 00:23:44.786 Boot Partition: Not Supported 00:23:44.786 Memory Page Size Minimum: 4096 bytes 00:23:44.787 Memory Page Size Maximum: 4096 bytes 00:23:44.787 Persistent Memory Region: Not Supported 00:23:44.787 Optional Asynchronous Events Supported 00:23:44.787 Namespace Attribute Notices: Not Supported 00:23:44.787 Firmware Activation Notices: Not Supported 00:23:44.787 ANA Change Notices: Not Supported 00:23:44.787 PLE Aggregate Log Change Notices: Not Supported 00:23:44.787 LBA Status Info Alert Notices: Not Supported 00:23:44.787 EGE Aggregate Log Change Notices: Not Supported 00:23:44.787 Normal NVM Subsystem Shutdown event: Not Supported 00:23:44.787 Zone Descriptor Change Notices: Not Supported 00:23:44.787 Discovery Log Change Notices: Supported 00:23:44.787 Controller Attributes 00:23:44.787 128-bit Host Identifier: Not Supported 00:23:44.787 Non-Operational Permissive Mode: Not Supported 00:23:44.787 NVM Sets: Not Supported 00:23:44.787 Read Recovery Levels: Not Supported 00:23:44.787 Endurance Groups: Not Supported 00:23:44.787 Predictable Latency Mode: Not Supported 00:23:44.787 Traffic Based Keep ALive: Not Supported 00:23:44.787 Namespace Granularity: Not Supported 00:23:44.787 SQ Associations: Not Supported 00:23:44.787 UUID List: Not Supported 00:23:44.787 Multi-Domain Subsystem: Not Supported 00:23:44.787 Fixed Capacity Management: Not Supported 00:23:44.787 Variable Capacity Management: Not Supported 00:23:44.787 Delete Endurance Group: Not Supported 00:23:44.787 Delete NVM Set: Not Supported 00:23:44.787 Extended LBA Formats Supported: Not Supported 00:23:44.787 Flexible Data 
Placement Supported: Not Supported 00:23:44.787 00:23:44.787 Controller Memory Buffer Support 00:23:44.787 ================================ 00:23:44.787 Supported: No 00:23:44.787 00:23:44.787 Persistent Memory Region Support 00:23:44.787 ================================ 00:23:44.787 Supported: No 00:23:44.787 00:23:44.787 Admin Command Set Attributes 00:23:44.787 ============================ 00:23:44.787 Security Send/Receive: Not Supported 00:23:44.787 Format NVM: Not Supported 00:23:44.787 Firmware Activate/Download: Not Supported 00:23:44.787 Namespace Management: Not Supported 00:23:44.787 Device Self-Test: Not Supported 00:23:44.787 Directives: Not Supported 00:23:44.787 NVMe-MI: Not Supported 00:23:44.787 Virtualization Management: Not Supported 00:23:44.787 Doorbell Buffer Config: Not Supported 00:23:44.787 Get LBA Status Capability: Not Supported 00:23:44.787 Command & Feature Lockdown Capability: Not Supported 00:23:44.787 Abort Command Limit: 1 00:23:44.787 Async Event Request Limit: 1 00:23:44.787 Number of Firmware Slots: N/A 00:23:44.787 Firmware Slot 1 Read-Only: N/A 00:23:44.787 Firmware Activation Without Reset: N/A 00:23:44.787 Multiple Update Detection Support: N/A 00:23:44.787 Firmware Update Granularity: No Information Provided 00:23:44.787 Per-Namespace SMART Log: No 00:23:44.787 Asymmetric Namespace Access Log Page: Not Supported 00:23:44.787 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:44.787 Command Effects Log Page: Not Supported 00:23:44.787 Get Log Page Extended Data: Supported 00:23:44.787 Telemetry Log Pages: Not Supported 00:23:44.787 Persistent Event Log Pages: Not Supported 00:23:44.787 Supported Log Pages Log Page: May Support 00:23:44.787 Commands Supported & Effects Log Page: Not Supported 00:23:44.787 Feature Identifiers & Effects Log Page:May Support 00:23:44.787 NVMe-MI Commands & Effects Log Page: May Support 00:23:44.787 Data Area 4 for Telemetry Log: Not Supported 00:23:44.787 Error Log Page Entries Supported: 1 00:23:44.787 Keep Alive: Not Supported 00:23:44.787 00:23:44.787 NVM Command Set Attributes 00:23:44.787 ========================== 00:23:44.787 Submission Queue Entry Size 00:23:44.787 Max: 1 00:23:44.787 Min: 1 00:23:44.787 Completion Queue Entry Size 00:23:44.787 Max: 1 00:23:44.787 Min: 1 00:23:44.787 Number of Namespaces: 0 00:23:44.787 Compare Command: Not Supported 00:23:44.787 Write Uncorrectable Command: Not Supported 00:23:44.787 Dataset Management Command: Not Supported 00:23:44.787 Write Zeroes Command: Not Supported 00:23:44.787 Set Features Save Field: Not Supported 00:23:44.787 Reservations: Not Supported 00:23:44.787 Timestamp: Not Supported 00:23:44.787 Copy: Not Supported 00:23:44.787 Volatile Write Cache: Not Present 00:23:44.787 Atomic Write Unit (Normal): 1 00:23:44.787 Atomic Write Unit (PFail): 1 00:23:44.787 Atomic Compare & Write Unit: 1 00:23:44.787 Fused Compare & Write: Not Supported 00:23:44.787 Scatter-Gather List 00:23:44.787 SGL Command Set: Supported 00:23:44.787 SGL Keyed: Not Supported 00:23:44.787 SGL Bit Bucket Descriptor: Not Supported 00:23:44.787 SGL Metadata Pointer: Not Supported 00:23:44.787 Oversized SGL: Not Supported 00:23:44.787 SGL Metadata Address: Not Supported 00:23:44.787 SGL Offset: Supported 00:23:44.787 Transport SGL Data Block: Not Supported 00:23:44.787 Replay Protected Memory Block: Not Supported 00:23:44.787 00:23:44.787 Firmware Slot Information 00:23:44.787 ========================= 00:23:44.787 Active slot: 0 00:23:44.787 00:23:44.787 00:23:44.787 Error Log 
00:23:44.787 ========= 00:23:44.787 00:23:44.787 Active Namespaces 00:23:44.787 ================= 00:23:44.787 Discovery Log Page 00:23:44.787 ================== 00:23:44.787 Generation Counter: 2 00:23:44.787 Number of Records: 2 00:23:44.787 Record Format: 0 00:23:44.787 00:23:44.787 Discovery Log Entry 0 00:23:44.787 ---------------------- 00:23:44.787 Transport Type: 3 (TCP) 00:23:44.787 Address Family: 1 (IPv4) 00:23:44.787 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:44.787 Entry Flags: 00:23:44.787 Duplicate Returned Information: 0 00:23:44.787 Explicit Persistent Connection Support for Discovery: 0 00:23:44.787 Transport Requirements: 00:23:44.787 Secure Channel: Not Specified 00:23:44.787 Port ID: 1 (0x0001) 00:23:44.787 Controller ID: 65535 (0xffff) 00:23:44.787 Admin Max SQ Size: 32 00:23:44.787 Transport Service Identifier: 4420 00:23:44.787 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:44.787 Transport Address: 10.0.0.1 00:23:44.787 Discovery Log Entry 1 00:23:44.787 ---------------------- 00:23:44.787 Transport Type: 3 (TCP) 00:23:44.787 Address Family: 1 (IPv4) 00:23:44.787 Subsystem Type: 2 (NVM Subsystem) 00:23:44.787 Entry Flags: 00:23:44.787 Duplicate Returned Information: 0 00:23:44.787 Explicit Persistent Connection Support for Discovery: 0 00:23:44.787 Transport Requirements: 00:23:44.787 Secure Channel: Not Specified 00:23:44.787 Port ID: 1 (0x0001) 00:23:44.787 Controller ID: 65535 (0xffff) 00:23:44.787 Admin Max SQ Size: 32 00:23:44.787 Transport Service Identifier: 4420 00:23:44.787 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:44.787 Transport Address: 10.0.0.1 00:23:44.787 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:45.047 get_feature(0x01) failed 00:23:45.047 get_feature(0x02) failed 00:23:45.047 get_feature(0x04) failed 00:23:45.047 ===================================================== 00:23:45.047 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:45.047 ===================================================== 00:23:45.047 Controller Capabilities/Features 00:23:45.047 ================================ 00:23:45.047 Vendor ID: 0000 00:23:45.047 Subsystem Vendor ID: 0000 00:23:45.047 Serial Number: ddcaa84b61dca02554be 00:23:45.047 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:45.047 Firmware Version: 6.8.9-20 00:23:45.047 Recommended Arb Burst: 6 00:23:45.047 IEEE OUI Identifier: 00 00 00 00:23:45.047 Multi-path I/O 00:23:45.047 May have multiple subsystem ports: Yes 00:23:45.047 May have multiple controllers: Yes 00:23:45.047 Associated with SR-IOV VF: No 00:23:45.047 Max Data Transfer Size: Unlimited 00:23:45.047 Max Number of Namespaces: 1024 00:23:45.047 Max Number of I/O Queues: 128 00:23:45.047 NVMe Specification Version (VS): 1.3 00:23:45.047 NVMe Specification Version (Identify): 1.3 00:23:45.047 Maximum Queue Entries: 1024 00:23:45.047 Contiguous Queues Required: No 00:23:45.047 Arbitration Mechanisms Supported 00:23:45.047 Weighted Round Robin: Not Supported 00:23:45.047 Vendor Specific: Not Supported 00:23:45.047 Reset Timeout: 7500 ms 00:23:45.047 Doorbell Stride: 4 bytes 00:23:45.047 NVM Subsystem Reset: Not Supported 00:23:45.047 Command Sets Supported 00:23:45.047 NVM Command Set: Supported 00:23:45.047 Boot Partition: Not Supported 00:23:45.047 Memory 
Page Size Minimum: 4096 bytes 00:23:45.047 Memory Page Size Maximum: 4096 bytes 00:23:45.047 Persistent Memory Region: Not Supported 00:23:45.047 Optional Asynchronous Events Supported 00:23:45.047 Namespace Attribute Notices: Supported 00:23:45.047 Firmware Activation Notices: Not Supported 00:23:45.047 ANA Change Notices: Supported 00:23:45.047 PLE Aggregate Log Change Notices: Not Supported 00:23:45.047 LBA Status Info Alert Notices: Not Supported 00:23:45.047 EGE Aggregate Log Change Notices: Not Supported 00:23:45.047 Normal NVM Subsystem Shutdown event: Not Supported 00:23:45.047 Zone Descriptor Change Notices: Not Supported 00:23:45.047 Discovery Log Change Notices: Not Supported 00:23:45.047 Controller Attributes 00:23:45.047 128-bit Host Identifier: Supported 00:23:45.047 Non-Operational Permissive Mode: Not Supported 00:23:45.048 NVM Sets: Not Supported 00:23:45.048 Read Recovery Levels: Not Supported 00:23:45.048 Endurance Groups: Not Supported 00:23:45.048 Predictable Latency Mode: Not Supported 00:23:45.048 Traffic Based Keep ALive: Supported 00:23:45.048 Namespace Granularity: Not Supported 00:23:45.048 SQ Associations: Not Supported 00:23:45.048 UUID List: Not Supported 00:23:45.048 Multi-Domain Subsystem: Not Supported 00:23:45.048 Fixed Capacity Management: Not Supported 00:23:45.048 Variable Capacity Management: Not Supported 00:23:45.048 Delete Endurance Group: Not Supported 00:23:45.048 Delete NVM Set: Not Supported 00:23:45.048 Extended LBA Formats Supported: Not Supported 00:23:45.048 Flexible Data Placement Supported: Not Supported 00:23:45.048 00:23:45.048 Controller Memory Buffer Support 00:23:45.048 ================================ 00:23:45.048 Supported: No 00:23:45.048 00:23:45.048 Persistent Memory Region Support 00:23:45.048 ================================ 00:23:45.048 Supported: No 00:23:45.048 00:23:45.048 Admin Command Set Attributes 00:23:45.048 ============================ 00:23:45.048 Security Send/Receive: Not Supported 00:23:45.048 Format NVM: Not Supported 00:23:45.048 Firmware Activate/Download: Not Supported 00:23:45.048 Namespace Management: Not Supported 00:23:45.048 Device Self-Test: Not Supported 00:23:45.048 Directives: Not Supported 00:23:45.048 NVMe-MI: Not Supported 00:23:45.048 Virtualization Management: Not Supported 00:23:45.048 Doorbell Buffer Config: Not Supported 00:23:45.048 Get LBA Status Capability: Not Supported 00:23:45.048 Command & Feature Lockdown Capability: Not Supported 00:23:45.048 Abort Command Limit: 4 00:23:45.048 Async Event Request Limit: 4 00:23:45.048 Number of Firmware Slots: N/A 00:23:45.048 Firmware Slot 1 Read-Only: N/A 00:23:45.048 Firmware Activation Without Reset: N/A 00:23:45.048 Multiple Update Detection Support: N/A 00:23:45.048 Firmware Update Granularity: No Information Provided 00:23:45.048 Per-Namespace SMART Log: Yes 00:23:45.048 Asymmetric Namespace Access Log Page: Supported 00:23:45.048 ANA Transition Time : 10 sec 00:23:45.048 00:23:45.048 Asymmetric Namespace Access Capabilities 00:23:45.048 ANA Optimized State : Supported 00:23:45.048 ANA Non-Optimized State : Supported 00:23:45.048 ANA Inaccessible State : Supported 00:23:45.048 ANA Persistent Loss State : Supported 00:23:45.048 ANA Change State : Supported 00:23:45.048 ANAGRPID is not changed : No 00:23:45.048 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:45.048 00:23:45.048 ANA Group Identifier Maximum : 128 00:23:45.048 Number of ANA Group Identifiers : 128 00:23:45.048 Max Number of Allowed Namespaces : 1024 00:23:45.048 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:23:45.048 Command Effects Log Page: Supported 00:23:45.048 Get Log Page Extended Data: Supported 00:23:45.048 Telemetry Log Pages: Not Supported 00:23:45.048 Persistent Event Log Pages: Not Supported 00:23:45.048 Supported Log Pages Log Page: May Support 00:23:45.048 Commands Supported & Effects Log Page: Not Supported 00:23:45.048 Feature Identifiers & Effects Log Page:May Support 00:23:45.048 NVMe-MI Commands & Effects Log Page: May Support 00:23:45.048 Data Area 4 for Telemetry Log: Not Supported 00:23:45.048 Error Log Page Entries Supported: 128 00:23:45.048 Keep Alive: Supported 00:23:45.048 Keep Alive Granularity: 1000 ms 00:23:45.048 00:23:45.048 NVM Command Set Attributes 00:23:45.048 ========================== 00:23:45.048 Submission Queue Entry Size 00:23:45.048 Max: 64 00:23:45.048 Min: 64 00:23:45.048 Completion Queue Entry Size 00:23:45.048 Max: 16 00:23:45.048 Min: 16 00:23:45.048 Number of Namespaces: 1024 00:23:45.048 Compare Command: Not Supported 00:23:45.048 Write Uncorrectable Command: Not Supported 00:23:45.048 Dataset Management Command: Supported 00:23:45.048 Write Zeroes Command: Supported 00:23:45.048 Set Features Save Field: Not Supported 00:23:45.048 Reservations: Not Supported 00:23:45.048 Timestamp: Not Supported 00:23:45.048 Copy: Not Supported 00:23:45.048 Volatile Write Cache: Present 00:23:45.048 Atomic Write Unit (Normal): 1 00:23:45.048 Atomic Write Unit (PFail): 1 00:23:45.048 Atomic Compare & Write Unit: 1 00:23:45.048 Fused Compare & Write: Not Supported 00:23:45.048 Scatter-Gather List 00:23:45.048 SGL Command Set: Supported 00:23:45.048 SGL Keyed: Not Supported 00:23:45.048 SGL Bit Bucket Descriptor: Not Supported 00:23:45.048 SGL Metadata Pointer: Not Supported 00:23:45.048 Oversized SGL: Not Supported 00:23:45.048 SGL Metadata Address: Not Supported 00:23:45.048 SGL Offset: Supported 00:23:45.048 Transport SGL Data Block: Not Supported 00:23:45.048 Replay Protected Memory Block: Not Supported 00:23:45.048 00:23:45.048 Firmware Slot Information 00:23:45.048 ========================= 00:23:45.048 Active slot: 0 00:23:45.048 00:23:45.048 Asymmetric Namespace Access 00:23:45.048 =========================== 00:23:45.048 Change Count : 0 00:23:45.048 Number of ANA Group Descriptors : 1 00:23:45.048 ANA Group Descriptor : 0 00:23:45.048 ANA Group ID : 1 00:23:45.048 Number of NSID Values : 1 00:23:45.048 Change Count : 0 00:23:45.048 ANA State : 1 00:23:45.048 Namespace Identifier : 1 00:23:45.048 00:23:45.048 Commands Supported and Effects 00:23:45.048 ============================== 00:23:45.048 Admin Commands 00:23:45.048 -------------- 00:23:45.048 Get Log Page (02h): Supported 00:23:45.048 Identify (06h): Supported 00:23:45.048 Abort (08h): Supported 00:23:45.048 Set Features (09h): Supported 00:23:45.048 Get Features (0Ah): Supported 00:23:45.048 Asynchronous Event Request (0Ch): Supported 00:23:45.048 Keep Alive (18h): Supported 00:23:45.048 I/O Commands 00:23:45.048 ------------ 00:23:45.048 Flush (00h): Supported 00:23:45.048 Write (01h): Supported LBA-Change 00:23:45.048 Read (02h): Supported 00:23:45.048 Write Zeroes (08h): Supported LBA-Change 00:23:45.048 Dataset Management (09h): Supported 00:23:45.048 00:23:45.048 Error Log 00:23:45.048 ========= 00:23:45.048 Entry: 0 00:23:45.048 Error Count: 0x3 00:23:45.048 Submission Queue Id: 0x0 00:23:45.048 Command Id: 0x5 00:23:45.048 Phase Bit: 0 00:23:45.048 Status Code: 0x2 00:23:45.048 Status Code Type: 0x0 00:23:45.048 Do Not Retry: 1 00:23:45.048 Error 
Location: 0x28 00:23:45.048 LBA: 0x0 00:23:45.048 Namespace: 0x0 00:23:45.048 Vendor Log Page: 0x0 00:23:45.048 ----------- 00:23:45.048 Entry: 1 00:23:45.048 Error Count: 0x2 00:23:45.048 Submission Queue Id: 0x0 00:23:45.048 Command Id: 0x5 00:23:45.048 Phase Bit: 0 00:23:45.048 Status Code: 0x2 00:23:45.048 Status Code Type: 0x0 00:23:45.048 Do Not Retry: 1 00:23:45.048 Error Location: 0x28 00:23:45.048 LBA: 0x0 00:23:45.048 Namespace: 0x0 00:23:45.048 Vendor Log Page: 0x0 00:23:45.048 ----------- 00:23:45.048 Entry: 2 00:23:45.048 Error Count: 0x1 00:23:45.048 Submission Queue Id: 0x0 00:23:45.048 Command Id: 0x4 00:23:45.048 Phase Bit: 0 00:23:45.048 Status Code: 0x2 00:23:45.048 Status Code Type: 0x0 00:23:45.048 Do Not Retry: 1 00:23:45.048 Error Location: 0x28 00:23:45.048 LBA: 0x0 00:23:45.048 Namespace: 0x0 00:23:45.048 Vendor Log Page: 0x0 00:23:45.048 00:23:45.048 Number of Queues 00:23:45.048 ================ 00:23:45.048 Number of I/O Submission Queues: 128 00:23:45.048 Number of I/O Completion Queues: 128 00:23:45.048 00:23:45.048 ZNS Specific Controller Data 00:23:45.048 ============================ 00:23:45.048 Zone Append Size Limit: 0 00:23:45.048 00:23:45.048 00:23:45.048 Active Namespaces 00:23:45.048 ================= 00:23:45.048 get_feature(0x05) failed 00:23:45.048 Namespace ID:1 00:23:45.048 Command Set Identifier: NVM (00h) 00:23:45.048 Deallocate: Supported 00:23:45.048 Deallocated/Unwritten Error: Not Supported 00:23:45.048 Deallocated Read Value: Unknown 00:23:45.048 Deallocate in Write Zeroes: Not Supported 00:23:45.048 Deallocated Guard Field: 0xFFFF 00:23:45.048 Flush: Supported 00:23:45.048 Reservation: Not Supported 00:23:45.048 Namespace Sharing Capabilities: Multiple Controllers 00:23:45.048 Size (in LBAs): 1310720 (5GiB) 00:23:45.048 Capacity (in LBAs): 1310720 (5GiB) 00:23:45.048 Utilization (in LBAs): 1310720 (5GiB) 00:23:45.048 UUID: 2e984a36-5c8f-4490-afd6-f7b6f778e009 00:23:45.048 Thin Provisioning: Not Supported 00:23:45.048 Per-NS Atomic Units: Yes 00:23:45.048 Atomic Boundary Size (Normal): 0 00:23:45.048 Atomic Boundary Size (PFail): 0 00:23:45.048 Atomic Boundary Offset: 0 00:23:45.048 NGUID/EUI64 Never Reused: No 00:23:45.048 ANA group ID: 1 00:23:45.049 Namespace Write Protected: No 00:23:45.049 Number of LBA Formats: 1 00:23:45.049 Current LBA Format: LBA Format #00 00:23:45.049 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:45.049 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.049 rmmod nvme_tcp 00:23:45.049 rmmod nvme_fabrics 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:45.049 01:36:40 
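At this point the EXIT trap ('nvmftestfini || :; clean_kernel_target', armed at identify_kernel_nvmf.sh@13) is unwinding the test. Besides unloading nvme-tcp/nvme-fabrics (the rmmod lines above), the two less obvious steps that follow are sketched here, simplified from the commands traced next; the redirect target of the 'echo 0' is inferred.
    # 1) iptables scrub (common.sh@787): every rule the test added carries an
    #    '-m comment --comment SPDK_NVMF:...' tag, so cleanup re-applies the saved
    #    ruleset minus those lines instead of deleting rules one by one
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # 2) kernel target teardown (clean_kernel_target, common.sh@708-719)
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # target inferred
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet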
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:45.049 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:45.308 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:45.308 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:45.308 01:36:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:45.308 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:23:45.309 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:45.309 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:45.309 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:45.309 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:45.309 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:23:45.309 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:23:45.309 01:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:46.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:46.247 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:46.247 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:46.247 00:23:46.247 real 0m3.429s 00:23:46.247 user 0m1.266s 00:23:46.247 sys 0m1.515s 00:23:46.247 01:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.247 ************************************ 00:23:46.247 END TEST nvmf_identify_kernel_target 00:23:46.247 ************************************ 00:23:46.247 01:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.247 01:36:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:46.247 01:36:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.247 01:36:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.247 01:36:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.247 ************************************ 00:23:46.247 START TEST nvmf_auth_host 00:23:46.247 ************************************ 00:23:46.247 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:46.508 * Looking for test storage... 
00:23:46.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:46.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.508 --rc genhtml_branch_coverage=1 00:23:46.508 --rc genhtml_function_coverage=1 00:23:46.508 --rc genhtml_legend=1 00:23:46.508 --rc geninfo_all_blocks=1 00:23:46.508 --rc geninfo_unexecuted_blocks=1 00:23:46.508 00:23:46.508 ' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:46.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.508 --rc genhtml_branch_coverage=1 00:23:46.508 --rc genhtml_function_coverage=1 00:23:46.508 --rc genhtml_legend=1 00:23:46.508 --rc geninfo_all_blocks=1 00:23:46.508 --rc geninfo_unexecuted_blocks=1 00:23:46.508 00:23:46.508 ' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:46.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.508 --rc genhtml_branch_coverage=1 00:23:46.508 --rc genhtml_function_coverage=1 00:23:46.508 --rc genhtml_legend=1 00:23:46.508 --rc geninfo_all_blocks=1 00:23:46.508 --rc geninfo_unexecuted_blocks=1 00:23:46.508 00:23:46.508 ' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:46.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.508 --rc genhtml_branch_coverage=1 00:23:46.508 --rc genhtml_function_coverage=1 00:23:46.508 --rc genhtml_legend=1 00:23:46.508 --rc geninfo_all_blocks=1 00:23:46.508 --rc geninfo_unexecuted_blocks=1 00:23:46.508 00:23:46.508 ' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.508 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:46.508 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:46.509 Cannot find device "nvmf_init_br" 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:46.509 Cannot find device "nvmf_init_br2" 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:46.509 Cannot find device "nvmf_tgt_br" 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:46.509 Cannot find device "nvmf_tgt_br2" 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:46.509 Cannot find device "nvmf_init_br" 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:23:46.509 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:46.768 Cannot find device "nvmf_init_br2" 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:46.768 Cannot find device "nvmf_tgt_br" 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:46.768 Cannot find device "nvmf_tgt_br2" 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:46.768 Cannot find device "nvmf_br" 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:46.768 Cannot find device "nvmf_init_if" 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:46.768 Cannot find device "nvmf_init_if2" 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:46.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:46.768 01:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:46.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
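[Editor's note] The trace above first tries to tear down any leftover interfaces (the "Cannot find device" / "Cannot open network namespace" messages are expected on a clean host) and then builds the veth/namespace/bridge topology used by the rest of the run. A condensed sketch of the equivalent commands, with the namespace, interface names and addresses taken from the trace (error handling and the iptables/ping checks that follow are omitted):

# Build the nvmf test topology: initiator interfaces on the host,
# target interfaces inside a network namespace, all joined by one bridge.
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done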
00:23:46.768 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:47.027 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:47.027 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:47.027 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:47.027 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:47.027 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:47.027 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:47.027 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:47.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:47.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:23:47.027 00:23:47.027 --- 10.0.0.3 ping statistics --- 00:23:47.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.027 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:47.027 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:47.027 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:47.027 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:23:47.027 00:23:47.027 --- 10.0.0.4 ping statistics --- 00:23:47.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.027 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:47.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:47.028 00:23:47.028 --- 10.0.0.1 ping statistics --- 00:23:47.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.028 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:47.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:47.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:23:47.028 00:23:47.028 --- 10.0.0.2 ping statistics --- 00:23:47.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.028 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=84316 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 84316 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 84316 ']' 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
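[Editor's note] At this point nvmfappstart launches the SPDK NVMe-oF target inside the namespace with DH-HMAC-CHAP debug logging enabled (-L nvme_auth) and waits for its RPC socket. The launch command below is taken from the trace; the polling loop is only an assumed equivalent of waitforlisten, using the stock rpc.py client as a liveness probe.

# Start the target in the test namespace (command as shown in the trace above)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# Assumed wait step: poll the default RPC socket until the application answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done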
00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:47.028 01:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.965 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.965 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:23:47.965 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:47.965 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:47.965 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=d08a6cefe6cf562cd3bf1e1c8950f485 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.NI6 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key d08a6cefe6cf562cd3bf1e1c8950f485 0 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 d08a6cefe6cf562cd3bf1e1c8950f485 0 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=d08a6cefe6cf562cd3bf1e1c8950f485 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.NI6 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.NI6 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.NI6 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.225 01:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=9ae238ee9dace43dc10eb079521a421343beae997077889a76dd4fad557ea721 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.xDi 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 9ae238ee9dace43dc10eb079521a421343beae997077889a76dd4fad557ea721 3 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 9ae238ee9dace43dc10eb079521a421343beae997077889a76dd4fad557ea721 3 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=9ae238ee9dace43dc10eb079521a421343beae997077889a76dd4fad557ea721 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:23:48.225 01:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.xDi 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.xDi 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xDi 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=255d093be73022c68555f0218b54ea60f804bcdc9336f887 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.rAM 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 255d093be73022c68555f0218b54ea60f804bcdc9336f887 0 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 255d093be73022c68555f0218b54ea60f804bcdc9336f887 0 
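[Editor's note] The gen_dhchap_key calls above draw len/2 random bytes as a hex string (xxd -p -c0 -l N /dev/urandom) and hand it to an inline python step whose body is not shown in the trace. Judging from the DHHC-1 strings that appear later in this log (e.g. DHHC-1:00:MjU1ZDA5...EL0sug==:), that step wraps the hex text in the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hash id>:base64(secret + CRC-32 of secret):, with the hash id taken from the digests map shown above (null=0, sha256=1, sha384=2, sha512=3). The sketch below is a reconstruction under that assumption (CRC appended least-significant byte first, as nvme-cli's gen-dhchap-key does), not the script's actual code.

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, used verbatim as the secret text
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, 'little')   # assumed: CRC-32 appended little-endian
# "00" = no hash transformation of the secret (1=sha256, 2=sha384, 3=sha512)
print('DHHC-1:00:' + base64.b64encode(secret + crc).decode() + ':')
EOF

Later in the trace the resulting files are registered with the SPDK keyring (rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NI6, ...), echoed into the kernel target's configfs entries for nqn.2024-02.io.spdk:host0 (presumably its dhchap_* attributes; the redirection targets are not visible in xtrace output), and finally exercised through bdev_nvme_attach_controller --dhchap-key/--dhchap-ctrlr-key.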
00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=255d093be73022c68555f0218b54ea60f804bcdc9336f887 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.rAM 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.rAM 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.rAM 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=2cbfee1f2a15ec726bc33eb729ba0405713570ba16bd43ea 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.g4i 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 2cbfee1f2a15ec726bc33eb729ba0405713570ba16bd43ea 2 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 2cbfee1f2a15ec726bc33eb729ba0405713570ba16bd43ea 2 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=2cbfee1f2a15ec726bc33eb729ba0405713570ba16bd43ea 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:23:48.225 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.g4i 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.g4i 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.g4i 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.484 01:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=262dc19628e6e4786a72e2395073200a 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.yk4 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 262dc19628e6e4786a72e2395073200a 1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 262dc19628e6e4786a72e2395073200a 1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=262dc19628e6e4786a72e2395073200a 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.yk4 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.yk4 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yk4 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=63176cbef8628e6846e55af75a4cb7a8 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.p2b 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 63176cbef8628e6846e55af75a4cb7a8 1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 63176cbef8628e6846e55af75a4cb7a8 1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=63176cbef8628e6846e55af75a4cb7a8 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.p2b 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.p2b 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.p2b 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b459152d9dd8e7a8e1f08d01c75a2e267b2df7dbf3cddc12 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.9ZP 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b459152d9dd8e7a8e1f08d01c75a2e267b2df7dbf3cddc12 2 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b459152d9dd8e7a8e1f08d01c75a2e267b2df7dbf3cddc12 2 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b459152d9dd8e7a8e1f08d01c75a2e267b2df7dbf3cddc12 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.9ZP 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.9ZP 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9ZP 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:23:48.484 01:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5875e34f223ed8d1325b64fa07904e13 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.XZb 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5875e34f223ed8d1325b64fa07904e13 0 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5875e34f223ed8d1325b64fa07904e13 0 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=5875e34f223ed8d1325b64fa07904e13 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:23:48.484 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.XZb 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.XZb 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XZb 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=83fce147f0298accd22293853633cec60a1eb04d5dcf4a9bfd9a27dbfad68cb3 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.Zu9 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 83fce147f0298accd22293853633cec60a1eb04d5dcf4a9bfd9a27dbfad68cb3 3 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 83fce147f0298accd22293853633cec60a1eb04d5dcf4a9bfd9a27dbfad68cb3 3 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=83fce147f0298accd22293853633cec60a1eb04d5dcf4a9bfd9a27dbfad68cb3 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.Zu9 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.Zu9 00:23:48.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Zu9 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84316 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 84316 ']' 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.742 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NI6 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xDi ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xDi 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.rAM 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.g4i ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.g4i 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yk4 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.p2b ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p2b 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9ZP 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XZb ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XZb 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Zu9 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:49.002 01:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:49.002 01:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:49.260 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:49.519 Waiting for block devices as requested 00:23:49.519 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:49.519 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:50.086 01:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:50.347 No valid GPT data, bailing 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:50.347 No valid GPT data, bailing 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:50.347 No valid GPT data, bailing 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:50.347 No valid GPT data, bailing 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:50.347 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:23:50.606 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -a 10.0.0.1 -t tcp -s 4420 00:23:50.607 00:23:50.607 Discovery Log Number of Records 2, Generation counter 2 00:23:50.607 =====Discovery Log Entry 0====== 00:23:50.607 trtype: tcp 00:23:50.607 adrfam: ipv4 00:23:50.607 subtype: current discovery subsystem 00:23:50.607 treq: not specified, sq flow control disable supported 00:23:50.607 portid: 1 00:23:50.607 trsvcid: 4420 00:23:50.607 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:50.607 traddr: 10.0.0.1 00:23:50.607 eflags: none 00:23:50.607 sectype: none 00:23:50.607 =====Discovery Log Entry 1====== 00:23:50.607 trtype: tcp 00:23:50.607 adrfam: ipv4 00:23:50.607 subtype: nvme subsystem 00:23:50.607 treq: not specified, sq flow control disable supported 00:23:50.607 portid: 1 00:23:50.607 trsvcid: 4420 00:23:50.607 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:50.607 traddr: 10.0.0.1 00:23:50.607 eflags: none 00:23:50.607 sectype: none 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.607 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.867 nvme0n1 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.867 nvme0n1 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.867 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.127 
01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:51.127 01:36:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.127 nvme0n1 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:51.127 01:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:51.127 01:36:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.127 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.128 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.387 nvme0n1 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.387 01:36:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:51.387 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:51.388 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.388 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.388 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.388 nvme0n1 00:23:51.388 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.388 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.388 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.388 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.388 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.647 
01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
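The trace above repeats one DH-HMAC-CHAP round per (digest, dhgroup, keyid) combination: the target-side key is staged via nvmet_auth_set_key, the host is restricted to the digest/dhgroup under test, a controller is attached with the matching key, its presence is verified, and it is detached before the next combination. The following is a condensed sketch of that per-combination sequence, assuming the keys/ckeys arrays and the rpc_cmd helper defined elsewhere in host/auth.sh and the SPDK test framework; it is illustrative only, not the script itself.

    # Sketch of one authentication round as repeated in the trace above (assumed helper names).
    connect_authenticate_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Host side: limit the initiator to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"

        # Attach over TCP with the DH-HMAC-CHAP key for this keyid; the controller
        # key is passed only when a ckey exists for that keyid (keyid 4 in the
        # trace has an empty ckey, so the option is omitted there).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

        # Verify the controller actually authenticated and came up, then tear it
        # down so the next digest/dhgroup/keyid combination starts clean.
        [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The "nvme0n1" lines interleaved in the trace are kernel namespace names printed while the attached controller is live; they are expected output, not errors.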
00:23:51.647 nvme0n1 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.647 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.215 01:36:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:52.215 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.216 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.216 01:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.216 nvme0n1 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.216 01:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.216 01:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.216 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.475 nvme0n1 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.475 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.476 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.734 nvme0n1 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.734 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.735 nvme0n1 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.735 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.993 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.994 nvme0n1 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.994 01:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.561 01:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.561 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.821 nvme0n1 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.821 01:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.821 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.080 nvme0n1 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:54.080 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.081 01:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.339 nvme0n1 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.339 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 nvme0n1 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.598 01:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.598 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.599 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.858 nvme0n1 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.858 01:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.763 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.764 nvme0n1 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.764 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.024 nvme0n1 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.024 01:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.024 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.284 01:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.284 01:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.543 nvme0n1 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:57.543 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.543 
01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.803 nvme0n1 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.803 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.062 01:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.322 nvme0n1 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.322 01:36:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.322 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.890 nvme0n1 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:58.890 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.891 01:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.479 nvme0n1 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.479 
01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.479 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.075 nvme0n1 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.075 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.076 01:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.647 nvme0n1 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.647 01:36:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:00.647 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:00.648 01:36:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:00.648 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:00.907 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.907 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.907 01:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.166 nvme0n1 00:24:01.166 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.166 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.166 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.166 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.166 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:01.426 nvme0n1 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.426 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.686 nvme0n1 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:01.686 
01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.686 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.945 nvme0n1 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:01.945 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.946 
01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.946 nvme0n1 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.946 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.205 01:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.205 nvme0n1 00:24:02.205 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.205 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.205 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.205 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.205 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.205 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.206 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.465 nvme0n1 00:24:02.465 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.465 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.465 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.465 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.465 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.465 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.465 
01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:02.466 01:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.466 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.725 nvme0n1 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:02.725 01:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.725 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.726 nvme0n1 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.726 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.985 01:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:02.985 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.986 nvme0n1 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.986 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.245 
01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.245 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:03.246 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.246 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:03.246 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:03.246 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:03.246 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.246 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.246 01:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:03.246 nvme0n1 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.246 01:36:59 
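The host/auth.sh@101-104 frames above show the driving loop. Restated as a sketch, using only what the trace itself shows (every dhgroup is exercised against every key id with the sha384 digest; the function bodies are the parts traced elsewhere in this log):

  for dhgroup in "${dhgroups[@]}"; do              # ffdhe3072, ffdhe4096, ffdhe6144 in this part of the run
      for keyid in "${!keys[@]}"; do               # key ids 0..4
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"      # program the target-side key
          connect_authenticate sha384 "$dhgroup" "$keyid"    # attach with that key, verify, detach
      done
  done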
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.246 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.505 nvme0n1 00:24:03.505 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.505 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.505 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.505 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.505 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.506 01:36:59 
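get_main_ns_ip, traced repeatedly above, resolves the address used for each attach. A rough reconstruction from the xtrace: the candidate table and the final echo of 10.0.0.1 are literal, while the transport variable name (TEST_TRANSPORT here) and the exact indirection are assumptions.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}         # tcp -> NVMF_INITIATOR_IP in this run
      [[ -z ${!ip} ]] && return 1                  # trace shows: [[ -z 10.0.0.1 ]]
      echo "${!ip}"                                # 10.0.0.1 here
  }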
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.506 01:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.506 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.765 nvme0n1 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.765 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.766 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.025 nvme0n1 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:04.025 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.026 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.284 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.284 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.284 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.285 01:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.285 nvme0n1 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.285 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.544 nvme0n1 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.544 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.545 01:37:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.545 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 nvme0n1 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.113 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.114 01:37:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.114 01:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.373 nvme0n1 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.373 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.632 nvme0n1 00:24:05.632 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.632 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.632 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.632 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.632 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.892 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.151 nvme0n1 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.151 01:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.151 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.151 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.151 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:06.151 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.151 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.151 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.151 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:06.151 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.152 01:37:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.152 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.411 nvme0n1 00:24:06.411 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.411 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.411 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.411 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.411 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:06.670 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.671 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.238 nvme0n1 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:07.238 01:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.238 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.807 nvme0n1 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.807 01:37:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.807 01:37:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.807 01:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.375 nvme0n1 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:08.375 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.376 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.376 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:08.376 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.376 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:08.376 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:08.376 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:08.376 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:08.376 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.376 
01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.943 nvme0n1 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.943 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.944 01:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.511 nvme0n1 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:09.511 01:37:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:09.511 01:37:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.511 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.771 nvme0n1 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:09.771 01:37:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.771 nvme0n1 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.771 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.030 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.031 nvme0n1 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.031 01:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.290 nvme0n1 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.290 nvme0n1 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.290 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
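Each connect_authenticate pass in this trace follows the same host-side sequence: restrict the allowed DH-HMAC-CHAP digest and DH group via bdev_nvme_set_options, attach a controller over TCP to 10.0.0.1:4420 with the per-keyid host key (plus the controller key when one is defined), confirm the controller shows up in bdev_nvme_get_controllers, and detach it again. A condensed bash sketch of that sequence follows; it assumes rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and the function signature and argument handling are reconstructed from the trace rather than copied from host/auth.sh.

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow exactly one digest/dhgroup combination for this pass.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach over TCP; ckeys[keyid] may be empty, in which case no controller key is sent.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # The authenticated controller must be visible before it is torn down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The raw trace of the next iterations continues below.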
common/autotest_common.sh@10 -- # set +x 00:24:10.550 nvme0n1 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.550 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:10.551 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.810 nvme0n1 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:10.810 
01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.810 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.069 nvme0n1 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.069 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.070 
01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.070 01:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.329 nvme0n1 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.329 nvme0n1 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.329 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.586 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.586 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.586 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.586 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
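The host/auth.sh@101 and @102 markers visible throughout this section show the structure driving it: an outer loop over the FFDHE groups (ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144 all appear in this trace) and an inner loop over the five key slots. Slot 4 has an empty controller key, which is why its attach calls carry only --dhchap-key key4. A sketch of that driver loop under those assumptions is below; the array contents are abbreviated, and the real script defines the full DHHC-1 secrets seen in the trace.

    # keys[0..4] hold DHHC-1 host secrets; ckeys[] hold the matching controller secrets.
    # ckeys[4]='' so key slot 4 is exercised without bidirectional authentication.
    for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
        for keyid in "${!keys[@]}"; do             # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # then prove the host can connect
        done
    done

The trace picks up again below with the ffdhe4096 iterations.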
host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.587 nvme0n1 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.587 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.846 
01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:11.846 01:37:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.846 nvme0n1 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.846 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:12.106 01:37:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.106 nvme0n1 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.106 01:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.106 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.364 01:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.364 nvme0n1 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.364 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.623 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:12.624 
01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
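On the target side, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) echoes the digest as 'hmac(sha512)', the DH group name, the host key, and, when the slot defines one, the controller key. The redirection targets are trimmed out of this log; the configfs paths in the sketch below are an assumption about the kernel nvmet target and should not be read off this trace.

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed kernel nvmet configfs layout; the trace only shows the echoed values.
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host_dir/dhchap_hash"
        echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"
        echo "$key"            > "$host_dir/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    }

The remaining ffdhe6144 iterations of the trace follow.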
00:24:12.624 nvme0n1 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.624 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.883 01:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.883 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.142 nvme0n1 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.142 01:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.142 01:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.142 01:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.401 nvme0n1 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.401 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.660 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.919 nvme0n1 00:24:13.919 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.919 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.919 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.919 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.919 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.919 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.920 01:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.178 nvme0n1 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.178 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.435 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.435 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.435 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.436 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.693 nvme0n1 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA4YTZjZWZlNmNmNTYyY2QzYmYxZTFjODk1MGY0ODXJONmF: 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: ]] 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWFlMjM4ZWU5ZGFjZTQzZGMxMGViMDc5NTIxYTQyMTM0M2JlYWU5OTcwNzc4ODlhNzZkZDRmYWQ1NTdlYTcyMXLKPwo=: 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.693 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.694 01:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.694 01:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.261 nvme0n1 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.261 01:37:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.261 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.827 nvme0n1 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.827 01:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.393 nvme0n1 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjQ1OTE1MmQ5ZGQ4ZTdhOGUxZjA4ZDAxYzc1YTJlMjY3YjJkZjdkYmYzY2RkYzEyj1HHHA==: 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: ]] 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg3NWUzNGYyMjNlZDhkMTMyNWI2NGZhMDc5MDRlMTNRYodm: 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.393 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.652 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.910 nvme0n1 00:24:16.910 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.910 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.910 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.910 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.910 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODNmY2UxNDdmMDI5OGFjY2QyMjI5Mzg1MzYzM2NlYzYwYTFlYjA0ZDVkY2Y0YTliZmQ5YTI3ZGJmYWQ2OGNiM+/wofw=: 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.185 01:37:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:17.185 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.186 01:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 nvme0n1 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 request: 00:24:17.798 { 00:24:17.798 "name": "nvme0", 00:24:17.798 "trtype": "tcp", 00:24:17.798 "traddr": "10.0.0.1", 00:24:17.798 "adrfam": "ipv4", 00:24:17.798 "trsvcid": "4420", 00:24:17.798 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:17.798 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:17.798 "prchk_reftag": false, 00:24:17.798 "prchk_guard": false, 00:24:17.798 "hdgst": false, 00:24:17.798 "ddgst": false, 00:24:17.798 "allow_unrecognized_csi": false, 00:24:17.798 "method": "bdev_nvme_attach_controller", 00:24:17.798 "req_id": 1 00:24:17.798 } 00:24:17.798 Got JSON-RPC error response 00:24:17.798 response: 00:24:17.798 { 00:24:17.798 "code": -5, 00:24:17.798 "message": "Input/output error" 00:24:17.798 } 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:17.798 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.799 request: 00:24:17.799 { 00:24:17.799 "name": "nvme0", 00:24:17.799 "trtype": "tcp", 00:24:17.799 "traddr": "10.0.0.1", 00:24:17.799 "adrfam": "ipv4", 00:24:17.799 "trsvcid": "4420", 00:24:17.799 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:17.799 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:17.799 "prchk_reftag": false, 00:24:17.799 "prchk_guard": false, 00:24:17.799 "hdgst": false, 00:24:17.799 "ddgst": false, 00:24:17.799 "dhchap_key": "key2", 00:24:17.799 "allow_unrecognized_csi": false, 00:24:17.799 "method": "bdev_nvme_attach_controller", 00:24:17.799 "req_id": 1 00:24:17.799 } 00:24:17.799 Got JSON-RPC error response 00:24:17.799 response: 00:24:17.799 { 00:24:17.799 "code": -5, 00:24:17.799 "message": "Input/output error" 00:24:17.799 } 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.799 01:37:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:17.799 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.058 request: 00:24:18.058 { 00:24:18.058 "name": "nvme0", 00:24:18.058 "trtype": "tcp", 00:24:18.058 "traddr": "10.0.0.1", 00:24:18.058 "adrfam": "ipv4", 00:24:18.058 "trsvcid": "4420", 
00:24:18.058 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:18.058 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:18.058 "prchk_reftag": false, 00:24:18.058 "prchk_guard": false, 00:24:18.058 "hdgst": false, 00:24:18.058 "ddgst": false, 00:24:18.058 "dhchap_key": "key1", 00:24:18.058 "dhchap_ctrlr_key": "ckey2", 00:24:18.058 "allow_unrecognized_csi": false, 00:24:18.058 "method": "bdev_nvme_attach_controller", 00:24:18.058 "req_id": 1 00:24:18.058 } 00:24:18.058 Got JSON-RPC error response 00:24:18.058 response: 00:24:18.058 { 00:24:18.058 "code": -5, 00:24:18.058 "message": "Input/output error" 00:24:18.058 } 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.058 nvme0n1 00:24:18.058 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.059 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.059 request: 00:24:18.059 { 00:24:18.059 "name": "nvme0", 00:24:18.059 "dhchap_key": "key1", 00:24:18.318 "dhchap_ctrlr_key": "ckey2", 00:24:18.318 "method": "bdev_nvme_set_keys", 00:24:18.318 "req_id": 1 00:24:18.318 } 00:24:18.318 Got JSON-RPC error response 00:24:18.318 response: 00:24:18.318 
{ 00:24:18.318 "code": -13, 00:24:18.318 "message": "Permission denied" 00:24:18.318 } 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.318 01:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:18.318 01:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.318 01:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:18.318 01:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU1ZDA5M2JlNzMwMjJjNjg1NTVmMDIxOGI1NGVhNjBmODA0YmNkYzkzMzZmODg3EL0sug==: 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: ]] 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNiZmVlMWYyYTE1ZWM3MjZiYzMzZWI3MjliYTA0MDU3MTM1NzBiYTE2YmQ0M2VhbCW13Q==: 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.255 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.514 nvme0n1 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjYyZGMxOTYyOGU2ZTQ3ODZhNzJlMjM5NTA3MzIwMGEDLSPL: 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: ]] 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMxNzZjYmVmODYyOGU2ODQ2ZTU1YWY3NWE0Y2I3YTjFmjeL: 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.514 request: 00:24:19.514 { 00:24:19.514 "name": "nvme0", 00:24:19.514 "dhchap_key": "key2", 00:24:19.514 "dhchap_ctrlr_key": "ckey1", 00:24:19.514 "method": "bdev_nvme_set_keys", 00:24:19.514 "req_id": 1 00:24:19.514 } 00:24:19.514 Got JSON-RPC error response 00:24:19.514 response: 00:24:19.514 { 00:24:19.514 "code": -13, 00:24:19.514 "message": "Permission denied" 00:24:19.514 } 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:19.514 01:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:24:20.454 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.714 rmmod nvme_tcp 00:24:20.714 rmmod nvme_fabrics 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 84316 ']' 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 84316 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 84316 ']' 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 84316 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84316 00:24:20.714 killing process with pid 84316 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84316' 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 84316 00:24:20.714 01:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 84316 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:21.651 01:37:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.651 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:24:21.911 01:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:22.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:22.478 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:24:22.737 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:22.737 01:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NI6 /tmp/spdk.key-null.rAM /tmp/spdk.key-sha256.yk4 /tmp/spdk.key-sha384.9ZP /tmp/spdk.key-sha512.Zu9 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:22.737 01:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:22.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:22.996 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:22.996 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:22.996 ************************************ 00:24:22.996 END TEST nvmf_auth_host 00:24:22.996 ************************************ 00:24:22.996 00:24:22.996 real 0m36.749s 00:24:22.996 user 0m34.280s 00:24:22.996 sys 0m4.075s 00:24:22.996 01:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.996 01:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.255 01:37:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:23.255 01:37:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:23.255 01:37:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:23.255 01:37:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:23.255 01:37:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.255 ************************************ 00:24:23.255 START TEST nvmf_digest 00:24:23.255 ************************************ 00:24:23.255 01:37:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:23.255 * Looking for test storage... 
00:24:23.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:23.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.255 --rc genhtml_branch_coverage=1 00:24:23.255 --rc genhtml_function_coverage=1 00:24:23.255 --rc genhtml_legend=1 00:24:23.255 --rc geninfo_all_blocks=1 00:24:23.255 --rc geninfo_unexecuted_blocks=1 00:24:23.255 00:24:23.255 ' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:23.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.255 --rc genhtml_branch_coverage=1 00:24:23.255 --rc genhtml_function_coverage=1 00:24:23.255 --rc genhtml_legend=1 00:24:23.255 --rc geninfo_all_blocks=1 00:24:23.255 --rc geninfo_unexecuted_blocks=1 00:24:23.255 00:24:23.255 ' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:23.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.255 --rc genhtml_branch_coverage=1 00:24:23.255 --rc genhtml_function_coverage=1 00:24:23.255 --rc genhtml_legend=1 00:24:23.255 --rc geninfo_all_blocks=1 00:24:23.255 --rc geninfo_unexecuted_blocks=1 00:24:23.255 00:24:23.255 ' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:23.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.255 --rc genhtml_branch_coverage=1 00:24:23.255 --rc genhtml_function_coverage=1 00:24:23.255 --rc genhtml_legend=1 00:24:23.255 --rc geninfo_all_blocks=1 00:24:23.255 --rc geninfo_unexecuted_blocks=1 00:24:23.255 00:24:23.255 ' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.255 01:37:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:23.255 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:23.256 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:23.514 Cannot find device "nvmf_init_br" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:23.514 Cannot find device "nvmf_init_br2" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:23.514 Cannot find device "nvmf_tgt_br" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:24:23.514 Cannot find device "nvmf_tgt_br2" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:23.514 Cannot find device "nvmf_init_br" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:23.514 Cannot find device "nvmf_init_br2" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:23.514 Cannot find device "nvmf_tgt_br" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:23.514 Cannot find device "nvmf_tgt_br2" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:23.514 Cannot find device "nvmf_br" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:23.514 Cannot find device "nvmf_init_if" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:23.514 Cannot find device "nvmf_init_if2" 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:23.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:23.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:23.514 01:37:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:23.514 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:23.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:23.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:24:23.772 00:24:23.772 --- 10.0.0.3 ping statistics --- 00:24:23.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.772 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:23.772 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:23.772 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:24:23.772 00:24:23.772 --- 10.0.0.4 ping statistics --- 00:24:23.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.772 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:23.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:23.772 00:24:23.772 --- 10.0.0.1 ping statistics --- 00:24:23.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.772 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:23.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:24:23.772 00:24:23.772 --- 10.0.0.2 ping statistics --- 00:24:23.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.772 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:23.772 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.773 ************************************ 00:24:23.773 START TEST nvmf_digest_clean 00:24:23.773 ************************************ 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
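The ping exchanges above validate the topology that nvmf/common.sh builds before any NVMe/TCP traffic is sent: two initiator-side veth links (10.0.0.1, 10.0.0.2) in the default namespace, two target-side links (10.0.0.3, 10.0.0.4) inside nvmf_tgt_ns_spdk, their bridge-side peers joined by nvmf_br, and iptables ACCEPT rules for TCP port 4420. A condensed sketch of the same setup (first initiator/target pair only, using the interface and address names from this run):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # default namespace -> target namespace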
00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=85985 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 85985 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 85985 ']' 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.773 01:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:24.031 [2024-09-28 01:37:19.707028] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:24.031 [2024-09-28 01:37:19.707424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.031 [2024-09-28 01:37:19.878189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.290 [2024-09-28 01:37:20.109074] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.290 [2024-09-28 01:37:20.109165] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.290 [2024-09-28 01:37:20.109204] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.290 [2024-09-28 01:37:20.109226] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.290 [2024-09-28 01:37:20.109243] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
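nvmfappstart above launches the target inside the namespace and parks it at --wait-for-rpc until its UNIX-domain RPC socket answers; only then does the test proceed. A minimal sketch of that launch/wait pattern, with paths shortened to the repo root and polling rpc_get_methods standing in for the fuller waitforlisten helper in autotest_common.sh:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1    # wait for the RPC listener before configuring the target
  done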
00:24:24.290 [2024-09-28 01:37:20.109292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.857 01:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:25.117 [2024-09-28 01:37:20.884475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:25.117 null0 00:24:25.117 [2024-09-28 01:37:20.984013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.117 [2024-09-28 01:37:21.008181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86017 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86017 /var/tmp/bperf.sock 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86017 ']' 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.117 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:25.377 [2024-09-28 01:37:21.126742] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:25.377 [2024-09-28 01:37:21.126920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86017 ] 00:24:25.377 [2024-09-28 01:37:21.299178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.636 [2024-09-28 01:37:21.497578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.204 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.204 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:26.204 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:26.204 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:26.204 01:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:26.464 [2024-09-28 01:37:22.324128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:26.723 01:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.723 01:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.982 nvme0n1 00:24:26.982 01:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:26.982 01:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:26.982 Running I/O for 2 seconds... 
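Each bperf pass drives its own bdevperf instance over /var/tmp/bperf.sock: framework_start_init completes the subsystem initialization that --wait-for-rpc deferred, bdev_nvme_attach_controller --ddgst connects to the listener at 10.0.0.3:4420 with data digest enabled, and bdevperf.py perform_tests starts the 2-second timed run. Condensed, with the same socket and target parameters as this log (repo paths shortened):

  rpc='./scripts/rpc.py -s /var/tmp/bperf.sock'
  $rpc framework_start_init
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests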
00:24:29.295 14478.00 IOPS, 56.55 MiB/s 14605.00 IOPS, 57.05 MiB/s 00:24:29.296 Latency(us) 00:24:29.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.296 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:29.296 nvme0n1 : 2.01 14644.92 57.21 0.00 0.00 8733.99 8221.79 24069.59 00:24:29.296 =================================================================================================================== 00:24:29.296 Total : 14644.92 57.21 0.00 0.00 8733.99 8221.79 24069.59 00:24:29.296 { 00:24:29.296 "results": [ 00:24:29.296 { 00:24:29.296 "job": "nvme0n1", 00:24:29.296 "core_mask": "0x2", 00:24:29.296 "workload": "randread", 00:24:29.296 "status": "finished", 00:24:29.296 "queue_depth": 128, 00:24:29.296 "io_size": 4096, 00:24:29.296 "runtime": 2.011961, 00:24:29.296 "iops": 14644.916079387225, 00:24:29.296 "mibps": 57.20670343510635, 00:24:29.296 "io_failed": 0, 00:24:29.296 "io_timeout": 0, 00:24:29.296 "avg_latency_us": 8733.990595683632, 00:24:29.296 "min_latency_us": 8221.789090909091, 00:24:29.296 "max_latency_us": 24069.585454545453 00:24:29.296 } 00:24:29.296 ], 00:24:29.296 "core_count": 1 00:24:29.296 } 00:24:29.296 01:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:29.296 01:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:29.296 01:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:29.296 01:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:29.296 01:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:29.296 | select(.opcode=="crc32c") 00:24:29.296 | "\(.module_name) \(.executed)"' 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86017 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86017 ']' 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86017 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86017 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:29.296 killing process with pid 86017 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 86017' 00:24:29.296 Received shutdown signal, test time was about 2.000000 seconds 00:24:29.296 00:24:29.296 Latency(us) 00:24:29.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.296 =================================================================================================================== 00:24:29.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86017 00:24:29.296 01:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86017 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86085 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86085 /var/tmp/bperf.sock 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86085 ']' 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:30.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.233 01:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:30.492 [2024-09-28 01:37:26.165336] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:30.492 [2024-09-28 01:37:26.165517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86085 ] 00:24:30.492 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:30.492 Zero copy mechanism will not be used. 
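The pass criterion applied just above, before pid 86017 is killed, is not the I/O result itself but the accel framework counters: the test queries accel_get_stats over the bperf socket and requires that the crc32c opcode was executed at least once by the expected module (software here, since scan_dsa=false). A sketch of that check using the jq filter from the trace:

  read -r acc_module acc_executed < <(
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo 'digest path verified'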
00:24:30.492 [2024-09-28 01:37:26.321843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.751 [2024-09-28 01:37:26.484867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.319 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.319 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:31.319 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:31.319 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:31.319 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:31.578 [2024-09-28 01:37:27.382378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:31.578 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:31.578 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:32.145 nvme0n1 00:24:32.145 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:32.145 01:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:32.145 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:32.145 Zero copy mechanism will not be used. 00:24:32.145 Running I/O for 2 seconds... 
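The "Zero copy mechanism will not be used" notice preceding this run is expected rather than a failure: this pass uses 131072-byte I/Os, which is above the 65536-byte zero-copy threshold the tool reports (131072 = 2 × 65536), so buffers are copied normally; the 4096-byte passes stay below that threshold and never print the message.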
00:24:34.275 7120.00 IOPS, 890.00 MiB/s 7168.00 IOPS, 896.00 MiB/s 00:24:34.276 Latency(us) 00:24:34.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.276 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:34.276 nvme0n1 : 2.00 7164.36 895.54 0.00 0.00 2229.70 2025.66 10902.81 00:24:34.276 =================================================================================================================== 00:24:34.276 Total : 7164.36 895.54 0.00 0.00 2229.70 2025.66 10902.81 00:24:34.276 { 00:24:34.276 "results": [ 00:24:34.276 { 00:24:34.276 "job": "nvme0n1", 00:24:34.276 "core_mask": "0x2", 00:24:34.276 "workload": "randread", 00:24:34.276 "status": "finished", 00:24:34.276 "queue_depth": 16, 00:24:34.276 "io_size": 131072, 00:24:34.276 "runtime": 2.00325, 00:24:34.276 "iops": 7164.357918382629, 00:24:34.276 "mibps": 895.5447397978286, 00:24:34.276 "io_failed": 0, 00:24:34.276 "io_timeout": 0, 00:24:34.276 "avg_latency_us": 2229.6985101854666, 00:24:34.276 "min_latency_us": 2025.658181818182, 00:24:34.276 "max_latency_us": 10902.807272727272 00:24:34.276 } 00:24:34.276 ], 00:24:34.276 "core_count": 1 00:24:34.276 } 00:24:34.276 01:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:34.276 01:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:34.276 01:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:34.276 01:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:34.276 | select(.opcode=="crc32c") 00:24:34.276 | "\(.module_name) \(.executed)"' 00:24:34.276 01:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86085 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86085 ']' 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86085 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86085 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:34.535 killing process with pid 86085 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 86085' 00:24:34.535 Received shutdown signal, test time was about 2.000000 seconds 00:24:34.535 00:24:34.535 Latency(us) 00:24:34.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.535 =================================================================================================================== 00:24:34.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86085 00:24:34.535 01:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86085 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86154 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86154 /var/tmp/bperf.sock 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86154 ']' 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:35.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.471 01:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:35.471 [2024-09-28 01:37:31.259271] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:24:35.471 [2024-09-28 01:37:31.259438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86154 ] 00:24:35.730 [2024-09-28 01:37:31.418219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.730 [2024-09-28 01:37:31.581213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.297 01:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.298 01:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:36.298 01:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:36.298 01:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:36.298 01:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:36.866 [2024-09-28 01:37:32.568037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:36.866 01:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:36.866 01:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.124 nvme0n1 00:24:37.124 01:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:37.124 01:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:37.382 Running I/O for 2 seconds... 
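This randwrite pass is the third leg of a fixed four-way matrix that nvmf_digest_clean walks with DSA scanning disabled: randread and randwrite, each at 4096 B/queue depth 128 and at 131072 B/queue depth 16. Expressed as the run_bperf calls seen in this log (a sketch of the sequence, not the literal loop in digest.sh):

  for args in 'randread 4096 128' 'randread 131072 16' \
              'randwrite 4096 128' 'randwrite 131072 16'; do
    run_bperf $args false    # rw, block size, queue depth, scan_dsa=false
  done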
00:24:39.290 15495.00 IOPS, 60.53 MiB/s 15621.50 IOPS, 61.02 MiB/s 00:24:39.290 Latency(us) 00:24:39.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.290 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:39.290 nvme0n1 : 2.01 15625.26 61.04 0.00 0.00 8174.09 4021.53 18945.86 00:24:39.290 =================================================================================================================== 00:24:39.290 Total : 15625.26 61.04 0.00 0.00 8174.09 4021.53 18945.86 00:24:39.290 { 00:24:39.290 "results": [ 00:24:39.290 { 00:24:39.290 "job": "nvme0n1", 00:24:39.290 "core_mask": "0x2", 00:24:39.290 "workload": "randwrite", 00:24:39.290 "status": "finished", 00:24:39.290 "queue_depth": 128, 00:24:39.290 "io_size": 4096, 00:24:39.290 "runtime": 2.009055, 00:24:39.290 "iops": 15625.256650514793, 00:24:39.290 "mibps": 61.03615879107341, 00:24:39.290 "io_failed": 0, 00:24:39.290 "io_timeout": 0, 00:24:39.290 "avg_latency_us": 8174.093047910294, 00:24:39.290 "min_latency_us": 4021.5272727272727, 00:24:39.290 "max_latency_us": 18945.861818181816 00:24:39.290 } 00:24:39.290 ], 00:24:39.290 "core_count": 1 00:24:39.290 } 00:24:39.290 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:39.290 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:39.290 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:39.290 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:39.290 | select(.opcode=="crc32c") 00:24:39.290 | "\(.module_name) \(.executed)"' 00:24:39.290 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86154 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86154 ']' 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86154 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86154 00:24:39.549 killing process with pid 86154 00:24:39.549 Received shutdown signal, test time was about 2.000000 seconds 00:24:39.549 00:24:39.549 Latency(us) 00:24:39.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.549 =================================================================================================================== 
00:24:39.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86154' 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86154 00:24:39.549 01:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86154 00:24:40.485 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:40.485 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:40.485 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:40.485 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:40.485 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:40.485 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86225 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86225 /var/tmp/bperf.sock 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86225 ']' 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:40.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.486 01:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:40.745 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:40.745 Zero copy mechanism will not be used. 00:24:40.745 [2024-09-28 01:37:36.437328] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:24:40.745 [2024-09-28 01:37:36.437507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86225 ] 00:24:40.745 [2024-09-28 01:37:36.606302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.004 [2024-09-28 01:37:36.760824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.572 01:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.572 01:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:41.572 01:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:41.572 01:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:41.572 01:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:41.832 [2024-09-28 01:37:37.699056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:42.090 01:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:42.090 01:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:42.348 nvme0n1 00:24:42.348 01:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:42.348 01:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:42.348 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:42.348 Zero copy mechanism will not be used. 00:24:42.348 Running I/O for 2 seconds... 
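In the result block that follows, the MiB/s column is simply IOPS scaled by the 131072-byte block size: for example 5687.50 IOPS × 131072 B ÷ 2^20 B/MiB = 710.94 MiB/s, and the reported 2-second average of 5682.75 IOPS works out the same way to 710.34 MiB/s.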
00:24:44.665 5675.00 IOPS, 709.38 MiB/s 5687.50 IOPS, 710.94 MiB/s 00:24:44.665 Latency(us) 00:24:44.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.665 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:44.665 nvme0n1 : 2.00 5682.75 710.34 0.00 0.00 2808.32 2263.97 8340.95 00:24:44.665 =================================================================================================================== 00:24:44.665 Total : 5682.75 710.34 0.00 0.00 2808.32 2263.97 8340.95 00:24:44.665 { 00:24:44.665 "results": [ 00:24:44.665 { 00:24:44.665 "job": "nvme0n1", 00:24:44.665 "core_mask": "0x2", 00:24:44.665 "workload": "randwrite", 00:24:44.665 "status": "finished", 00:24:44.665 "queue_depth": 16, 00:24:44.665 "io_size": 131072, 00:24:44.665 "runtime": 2.004664, 00:24:44.665 "iops": 5682.7478320556465, 00:24:44.665 "mibps": 710.3434790069558, 00:24:44.665 "io_failed": 0, 00:24:44.665 "io_timeout": 0, 00:24:44.665 "avg_latency_us": 2808.323922369765, 00:24:44.665 "min_latency_us": 2263.970909090909, 00:24:44.665 "max_latency_us": 8340.945454545454 00:24:44.665 } 00:24:44.665 ], 00:24:44.665 "core_count": 1 00:24:44.665 } 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:44.665 | select(.opcode=="crc32c") 00:24:44.665 | "\(.module_name) \(.executed)"' 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86225 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86225 ']' 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86225 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86225 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:44.665 killing process with pid 86225 00:24:44.665 Received shutdown signal, test time was about 2.000000 seconds 00:24:44.665 00:24:44.665 Latency(us) 00:24:44.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:44.665 =================================================================================================================== 00:24:44.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86225' 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86225 00:24:44.665 01:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86225 00:24:45.604 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 85985 00:24:45.604 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 85985 ']' 00:24:45.604 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 85985 00:24:45.604 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:45.604 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.604 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85985 00:24:45.863 killing process with pid 85985 00:24:45.863 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:45.863 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:45.863 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85985' 00:24:45.863 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 85985 00:24:45.863 01:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 85985 00:24:46.800 00:24:46.800 real 0m22.879s 00:24:46.800 user 0m43.941s 00:24:46.800 sys 0m4.484s 00:24:46.800 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:46.800 ************************************ 00:24:46.800 END TEST nvmf_digest_clean 00:24:46.800 ************************************ 00:24:46.800 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.800 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:46.801 ************************************ 00:24:46.801 START TEST nvmf_digest_error 00:24:46.801 ************************************ 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 
-- # timing_enter start_nvmf_tgt 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=86324 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 86324 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86324 ']' 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.801 01:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:46.801 [2024-09-28 01:37:42.613670] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:46.801 [2024-09-28 01:37:42.614011] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.060 [2024-09-28 01:37:42.773219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.060 [2024-09-28 01:37:42.922017] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.060 [2024-09-28 01:37:42.922085] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.060 [2024-09-28 01:37:42.922103] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.060 [2024-09-28 01:37:42.922119] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.060 [2024-09-28 01:37:42.922129] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
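Where digest_clean only verified that crc32c actually ran, the nvmf_digest_error target started here (pid 86324) is set up to break the digest path on purpose: crc32c is assigned to the accel error module before init, and once a bperf controller is attached the test injects corruption while the initiator is told to count errors rather than retry. A condensed sketch of the injection RPCs traced below, with the same socket paths (rpc_cmd goes to the target's default /var/tmp/spdk.sock):

  ./scripts/rpc.py accel_assign_opc -o crc32c -m error                     # target: route crc32c via error module
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1                              # bperf: report errors, never retry
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # target: corrupt the next 256 ops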
00:24:47.060 [2024-09-28 01:37:42.922164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:47.997 [2024-09-28 01:37:43.607002] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:47.997 [2024-09-28 01:37:43.765349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:47.997 null0 00:24:47.997 [2024-09-28 01:37:43.866922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.997 [2024-09-28 01:37:43.891161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86365 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86365 /var/tmp/bperf.sock 00:24:47.997 01:37:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86365 ']' 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:47.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.997 01:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.257 [2024-09-28 01:37:44.010541] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:48.257 [2024-09-28 01:37:44.010903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86365 ] 00:24:48.257 [2024-09-28 01:37:44.179660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.516 [2024-09-28 01:37:44.330711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.775 [2024-09-28 01:37:44.475200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:49.342 01:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.342 01:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:49.342 01:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:49.342 01:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:49.342 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:49.342 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.342 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.342 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.342 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:49.342 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:49.910 nvme0n1 00:24:49.910 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:49.910 01:37:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.910 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.910 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.910 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:49.910 01:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:49.910 Running I/O for 2 seconds... 00:24:49.910 [2024-09-28 01:37:45.715375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:49.910 [2024-09-28 01:37:45.715601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.910 [2024-09-28 01:37:45.715627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.910 [2024-09-28 01:37:45.733090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:49.910 [2024-09-28 01:37:45.733151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.910 [2024-09-28 01:37:45.733173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.910 [2024-09-28 01:37:45.750480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:49.910 [2024-09-28 01:37:45.750756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.910 [2024-09-28 01:37:45.750789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.910 [2024-09-28 01:37:45.771331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:49.910 [2024-09-28 01:37:45.771425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.910 [2024-09-28 01:37:45.771445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.910 [2024-09-28 01:37:45.790905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:49.910 [2024-09-28 01:37:45.791137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.910 [2024-09-28 01:37:45.791173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.910 [2024-09-28 01:37:45.809039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:49.910 [2024-09-28 01:37:45.809247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.910 [2024-09-28 01:37:45.809272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.910 [2024-09-28 01:37:45.826578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:49.910 [2024-09-28 01:37:45.826768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.910 [2024-09-28 01:37:45.826797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.845087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:45.845164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.845186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.862661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:45.862711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.862730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.880112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:45.880173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.880194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.897578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:45.897636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.897658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.914939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:45.915026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.915046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.932326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 
01:37:45.932386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.932408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.949656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:45.949714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.949735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.967047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:45.967244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.967269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:45.984555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:45.984613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:45.984635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:46.001812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:46.001871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:46.001892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:46.019020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:46.019072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:46.019091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:46.036289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:46.036349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:46.036370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:46.053973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:46.054034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:46.054055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:46.071320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:46.071557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:46.071581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.169 [2024-09-28 01:37:46.088862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.169 [2024-09-28 01:37:46.088923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.169 [2024-09-28 01:37:46.088947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.107468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.107539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.107562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.124801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.124866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.124884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.142135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.142334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.142364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.159647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.159706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.159727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 
[2024-09-28 01:37:46.176843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.176908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.176927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.194204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.194264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.194285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.211440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.211652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.211683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.228893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.228958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.228977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.246039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.246098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.246119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.263658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.263703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.263724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.280885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.280949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.280968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.300084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.300148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.300170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.320127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.320193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.320212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.338488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.338548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.338569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.428 [2024-09-28 01:37:46.357046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.428 [2024-09-28 01:37:46.357108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.428 [2024-09-28 01:37:46.357130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.376369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.376436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 01:37:46.376455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.394722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.394768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 01:37:46.394792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.412952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.413199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 
01:37:46.413242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.431623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.431684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 01:37:46.431706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.449840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.449899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 01:37:46.449921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.468059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.468125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 01:37:46.468143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.486367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.486429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 01:37:46.486451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.504763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.504809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 01:37:46.504827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.523329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.687 [2024-09-28 01:37:46.523415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.687 [2024-09-28 01:37:46.523435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.687 [2024-09-28 01:37:46.541115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.688 [2024-09-28 01:37:46.541316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:2219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.688 [2024-09-28 01:37:46.541347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.688 [2024-09-28 01:37:46.558966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.688 [2024-09-28 01:37:46.559196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.688 [2024-09-28 01:37:46.559221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.688 [2024-09-28 01:37:46.576413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.688 [2024-09-28 01:37:46.576490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.688 [2024-09-28 01:37:46.576510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.688 [2024-09-28 01:37:46.593630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.688 [2024-09-28 01:37:46.593676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.688 [2024-09-28 01:37:46.593696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.688 [2024-09-28 01:37:46.610756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.688 [2024-09-28 01:37:46.610956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.688 [2024-09-28 01:37:46.611005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.629689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.629741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.629759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.646931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.647013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.647051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.664210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 
01:37:46.664274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.664292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.681593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.681658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.681676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 14042.00 IOPS, 54.85 MiB/s [2024-09-28 01:37:46.700343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.700402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.700425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.717591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.717656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.717674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.735083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.735274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.735305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.752594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.752655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.752676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.769651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.769716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.769734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.789724] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.789817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.789872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.809914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.809979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.809997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.827794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.827852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.827872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.852377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.852452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.852472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.947 [2024-09-28 01:37:46.869511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:50.947 [2024-09-28 01:37:46.869570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.947 [2024-09-28 01:37:46.869590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:46.888413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:46.888482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:46.888505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:46.906159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:46.906224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:46.906242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:46.923477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:46.923548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:46.923569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:46.940701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:46.940761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:46.940783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:46.958089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:46.958154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:46.958172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:46.975292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:46.975384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:46.975403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:46.992621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:46.992680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:46.992700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:47.010161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:47.010229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:47.010256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:47.027783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:47.027843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 
01:37:47.027864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:47.045381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:47.045439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:47.045471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:47.062728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:47.062779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:47.062797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:47.080173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:47.080231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:47.080251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:47.097414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:47.097483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:47.097504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:47.114685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:47.114747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:47.114765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.207 [2024-09-28 01:37:47.131854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.207 [2024-09-28 01:37:47.131913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.207 [2024-09-28 01:37:47.131932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.150441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 01:37:47.150512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:92 nsid:1 lba:6324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.466 [2024-09-28 01:37:47.150533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.167836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 01:37:47.167900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.466 [2024-09-28 01:37:47.167918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.185100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 01:37:47.185158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.466 [2024-09-28 01:37:47.185182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.202317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 01:37:47.202377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.466 [2024-09-28 01:37:47.202397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.219613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 01:37:47.219677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.466 [2024-09-28 01:37:47.219695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.236784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 01:37:47.236843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.466 [2024-09-28 01:37:47.236862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.254046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 01:37:47.254105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.466 [2024-09-28 01:37:47.254125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.271175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 
01:37:47.271226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.466 [2024-09-28 01:37:47.271244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.466 [2024-09-28 01:37:47.288473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.466 [2024-09-28 01:37:47.288531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.467 [2024-09-28 01:37:47.288550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.467 [2024-09-28 01:37:47.306098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.467 [2024-09-28 01:37:47.306157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.467 [2024-09-28 01:37:47.306178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.467 [2024-09-28 01:37:47.323395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.467 [2024-09-28 01:37:47.323486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.467 [2024-09-28 01:37:47.323506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.467 [2024-09-28 01:37:47.340550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.467 [2024-09-28 01:37:47.340609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.467 [2024-09-28 01:37:47.340629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.467 [2024-09-28 01:37:47.357701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.467 [2024-09-28 01:37:47.357760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.467 [2024-09-28 01:37:47.357780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.467 [2024-09-28 01:37:47.374800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.467 [2024-09-28 01:37:47.374857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.467 [2024-09-28 01:37:47.374874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.467 [2024-09-28 01:37:47.392177] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.467 [2024-09-28 01:37:47.392236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.467 [2024-09-28 01:37:47.392253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.410853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.410911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.726 [2024-09-28 01:37:47.410927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.428305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.428363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.726 [2024-09-28 01:37:47.428380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.445362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.445421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.726 [2024-09-28 01:37:47.445437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.462595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.462653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.726 [2024-09-28 01:37:47.462669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.479842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.479900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.726 [2024-09-28 01:37:47.479917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.496977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.497036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.726 [2024-09-28 01:37:47.497052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.514260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.514319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.726 [2024-09-28 01:37:47.514336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.534079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.534126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.726 [2024-09-28 01:37:47.534144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.726 [2024-09-28 01:37:47.554592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.726 [2024-09-28 01:37:47.554639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.727 [2024-09-28 01:37:47.554656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.727 [2024-09-28 01:37:47.573431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.727 [2024-09-28 01:37:47.573517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.727 [2024-09-28 01:37:47.573534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.727 [2024-09-28 01:37:47.591746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.727 [2024-09-28 01:37:47.591791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.727 [2024-09-28 01:37:47.591807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.727 [2024-09-28 01:37:47.609963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.727 [2024-09-28 01:37:47.610033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.727 [2024-09-28 01:37:47.610051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.727 [2024-09-28 01:37:47.628125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.727 [2024-09-28 01:37:47.628184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.727 [2024-09-28 01:37:47.628201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.727 [2024-09-28 01:37:47.646321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.727 [2024-09-28 01:37:47.646380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.727 [2024-09-28 01:37:47.646397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.986 [2024-09-28 01:37:47.665975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.986 [2024-09-28 01:37:47.666020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.986 [2024-09-28 01:37:47.666037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.986 [2024-09-28 01:37:47.684379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.986 [2024-09-28 01:37:47.684440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.986 [2024-09-28 01:37:47.684468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.986 14168.50 IOPS, 55.35 MiB/s 00:24:51.986 Latency(us) 00:24:51.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.986 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:51.986 nvme0n1 : 2.01 14192.99 55.44 0.00 0.00 9012.43 8340.95 33125.47 00:24:51.986 =================================================================================================================== 00:24:51.986 Total : 14192.99 55.44 0.00 0.00 9012.43 8340.95 33125.47 00:24:51.986 { 00:24:51.986 "results": [ 00:24:51.986 { 00:24:51.986 "job": "nvme0n1", 00:24:51.986 "core_mask": "0x2", 00:24:51.986 "workload": "randread", 00:24:51.986 "status": "finished", 00:24:51.986 "queue_depth": 128, 00:24:51.986 "io_size": 4096, 00:24:51.986 "runtime": 2.005568, 00:24:51.986 "iops": 14192.986724957718, 00:24:51.986 "mibps": 55.44135439436609, 00:24:51.986 "io_failed": 0, 00:24:51.986 "io_timeout": 0, 00:24:51.986 "avg_latency_us": 9012.431226610031, 00:24:51.986 "min_latency_us": 8340.945454545454, 00:24:51.986 "max_latency_us": 33125.46909090909 00:24:51.986 } 00:24:51.986 ], 00:24:51.986 "core_count": 1 00:24:51.986 } 00:24:51.986 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:51.986 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:51.986 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:51.986 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:51.986 | .driver_specific 00:24:51.986 | .nvme_error 00:24:51.986 | .status_code 00:24:51.986 | 
.command_transient_transport_error' 00:24:52.246 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 111 > 0 )) 00:24:52.246 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86365 00:24:52.246 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86365 ']' 00:24:52.246 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86365 00:24:52.246 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:52.246 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:52.246 01:37:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86365 00:24:52.246 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:52.246 killing process with pid 86365 00:24:52.246 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:52.246 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86365' 00:24:52.246 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86365 00:24:52.246 Received shutdown signal, test time was about 2.000000 seconds 00:24:52.246 00:24:52.246 Latency(us) 00:24:52.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.246 =================================================================================================================== 00:24:52.246 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:52.246 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86365 00:24:53.184 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:53.184 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:53.184 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:53.184 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:53.184 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:53.184 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86432 00:24:53.184 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:53.185 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86432 /var/tmp/bperf.sock 00:24:53.185 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86432 ']' 00:24:53.185 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:53.185 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
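Each digest-error run in this suite is graded by the same check seen just above: query the controller's error statistics over the bperf socket and require a non-zero transient transport error count. Joined onto one line (a sketch only; errs is an illustrative variable name, while the rpc.py path, socket and bdev name are the ones used in this run):

    errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 ))  # the 4 KiB randread run above recorded 111 transient transport errors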
00:24:53.185 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:53.185 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.185 01:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:53.185 [2024-09-28 01:37:48.976015] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:53.185 [2024-09-28 01:37:48.976169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86432 ] 00:24:53.185 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:53.185 Zero copy mechanism will not be used. 00:24:53.443 [2024-09-28 01:37:49.133588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.443 [2024-09-28 01:37:49.283717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.702 [2024-09-28 01:37:49.428677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:53.961 01:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.961 01:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:53.961 01:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:53.961 01:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:54.219 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:54.219 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.219 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.219 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.219 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.219 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.787 nvme0n1 00:24:54.787 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:54.787 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.787 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.787 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.787 01:37:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:54.787 01:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:54.787 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:54.787 Zero copy mechanism will not be used. 00:24:54.787 Running I/O for 2 seconds... 00:24:54.787 [2024-09-28 01:37:50.631928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.787 [2024-09-28 01:37:50.632023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.787 [2024-09-28 01:37:50.632045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.787 [2024-09-28 01:37:50.636779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.787 [2024-09-28 01:37:50.636872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.787 [2024-09-28 01:37:50.636896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.787 [2024-09-28 01:37:50.641653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.787 [2024-09-28 01:37:50.641729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.787 [2024-09-28 01:37:50.641751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.787 [2024-09-28 01:37:50.646176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.787 [2024-09-28 01:37:50.646258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.787 [2024-09-28 01:37:50.646277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.787 [2024-09-28 01:37:50.650892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.787 [2024-09-28 01:37:50.650980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.787 [2024-09-28 01:37:50.651017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.787 [2024-09-28 01:37:50.655522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.787 [2024-09-28 01:37:50.655594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.787 [2024-09-28 01:37:50.655615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.787 [2024-09-28 01:37:50.660043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.787 [2024-09-28 01:37:50.660117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.787 [2024-09-28 01:37:50.660138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.787 [2024-09-28 01:37:50.664832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.787 [2024-09-28 01:37:50.664913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.787 [2024-09-28 01:37:50.664931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.787 [2024-09-28 01:37:50.669414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.669510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.669530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.673983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.674057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.674078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.678591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.678665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.678686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.683234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.683304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.683324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.688036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.688112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.688132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.692681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.692740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.692777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.697321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.697403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.697421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.702093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.702175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.702194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.706795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.706869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.706889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.711393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.711476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.711501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.788 [2024-09-28 01:37:50.716345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:54.788 [2024-09-28 01:37:50.716427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.788 [2024-09-28 01:37:50.716445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.048 [2024-09-28 01:37:50.721540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.048 [2024-09-28 01:37:50.721604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.048 [2024-09-28 01:37:50.721622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.048 [2024-09-28 01:37:50.726547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.048 [2024-09-28 01:37:50.726622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.048 [2024-09-28 01:37:50.726644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.048 [2024-09-28 01:37:50.731084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.048 [2024-09-28 01:37:50.731164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.048 [2024-09-28 01:37:50.731187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.048 [2024-09-28 01:37:50.735775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.048 [2024-09-28 01:37:50.735845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.048 [2024-09-28 01:37:50.735864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.048 [2024-09-28 01:37:50.740646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.048 [2024-09-28 01:37:50.740729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.048 [2024-09-28 01:37:50.740748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.048 [2024-09-28 01:37:50.745253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.048 [2024-09-28 01:37:50.745327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.048 [2024-09-28 01:37:50.745349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.749874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.749948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.749969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.754428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.754519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.754538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.759001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.759084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.759103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.763616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.763690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.763710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.768148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.768230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.768248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.772847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.772927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.772945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.777550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.777624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.777647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.782081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.782155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.782176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.786753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.786832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.786851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.791273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.791371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.791405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.795971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.796046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.796066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.800548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.800622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.800644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.805076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.805157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.805175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.809803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.809884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.809902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.814337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.814411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.814432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 
01:37:50.818892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.818962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.819008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.823510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.823603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.823621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.828041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.828115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.828136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.832789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.832863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.832883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.837411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.837503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.837523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.841978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.842063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.842082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.846661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.846735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.846755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.851265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.851310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.851347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.855971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.856052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.856070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.860843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.860910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.860930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.866150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.866210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.866231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.049 [2024-09-28 01:37:50.871670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.049 [2024-09-28 01:37:50.871748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.049 [2024-09-28 01:37:50.871773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.877587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.877671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.877693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.883237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.883348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 
[2024-09-28 01:37:50.883381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.888783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.888889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.888912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.893894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.893969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.893989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.899040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.899111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.899130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.903802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.903882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.903900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.908484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.908558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.908579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.913134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.913208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.913228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.917898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.917964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.917997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.922552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.922622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.922643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.927226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.927273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.927294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.931867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.931942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.931965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.936531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.936613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.936631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.941162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.941244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.941263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.945876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.945934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.945970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.950570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 
[2024-09-28 01:37:50.950646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.950667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.955439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.955533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.955553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.960130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.960213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.960231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.964823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.964897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.964917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.969368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.969449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.969478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.973956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.974039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.050 [2024-09-28 01:37:50.974066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.050 [2024-09-28 01:37:50.979225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.050 [2024-09-28 01:37:50.979290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:50.979312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:50.984328] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:50.984419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:50.984456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:50.989436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:50.989531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:50.989554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:50.994077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:50.994160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:50.994179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:50.998637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:50.998696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:50.998719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:51.003518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:51.003603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:51.003625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:51.008137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:51.008218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:51.008236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:51.012819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:51.012901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:51.012919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:51.017437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:51.017522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:51.017543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:51.022083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:51.022157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:51.022178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.311 [2024-09-28 01:37:51.026783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.311 [2024-09-28 01:37:51.026850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.311 [2024-09-28 01:37:51.026880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.031626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.031693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.031711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.036206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.036266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.036286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.040893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.040952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.040977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.045504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.045567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.045584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.050089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.050150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.050167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.054603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.054659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.054679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.059150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.059198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.059216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.063859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.063922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.063939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.068501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.068556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.068575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.073122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.073177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.073198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.077795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.077859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.077877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.082404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.082491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.082510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.086907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.086962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.087020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.091507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.091573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.091593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.096059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.096120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.096138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.100761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.100822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.100839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.105291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.105347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.105369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.109882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.109947] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.109964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.114497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.114558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.114575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.118968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.119064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.119083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.123655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.123711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.123731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.128205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.128268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.128286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.132930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.132991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.133008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.137581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.137635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.137655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.142228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.142283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.142305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.146854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.146916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.312 [2024-09-28 01:37:51.146934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.312 [2024-09-28 01:37:51.151567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.312 [2024-09-28 01:37:51.151627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.151645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.156034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.156089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.156109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.160680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.160736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.160756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.165289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.165352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.165369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.169835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.169889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.169910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.313 
[2024-09-28 01:37:51.174459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.174522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.174542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.179104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.179153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.179172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.183719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.183779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.183796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.188304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.188360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.188380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.192939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.192997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.193016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.197576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.197637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.197654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.202131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.202192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.202209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.206717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.206788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.206807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.211306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.211394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.211416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.215925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.215980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.215996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.220521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.220577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.220594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.225059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.225114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.225131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.229765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.229819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.229835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.234354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.234409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 
[2024-09-28 01:37:51.234426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.313 [2024-09-28 01:37:51.239096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.313 [2024-09-28 01:37:51.239154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.313 [2024-09-28 01:37:51.239172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.244392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.244448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.244491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.249297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.249353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.249370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.254340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.254397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.254414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.259326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.259427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.259443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.264286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.264342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.264358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.269647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.269705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.269722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.274714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.274773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.274804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.279872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.279928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.279944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.284924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.284980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.284996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.289922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.289979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.289996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.294858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.294915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.294931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.299591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.299647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.299663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.304322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 
[2024-09-28 01:37:51.304379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.304396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.309376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.309435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.309452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.314154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.314212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.314228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.318867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.318923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.318940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.323738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.323795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.323811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.328496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.328553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.328570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.333343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.333400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.333416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.338226] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.338283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.338300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.343060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.343118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.343135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.347777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.347833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.347849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.352474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.573 [2024-09-28 01:37:51.352530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.573 [2024-09-28 01:37:51.352546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.573 [2024-09-28 01:37:51.357226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.357283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.357300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.362353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.362425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.362442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.366999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.367071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.367088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.371900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.371957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.371974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.376593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.376649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.376665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.381300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.381357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.381373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.386278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.386335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.386351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.391049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.391092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.391109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.395762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.395818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.395834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.400430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.400496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.400513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.405107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.405164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.405181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.409803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.409860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.409876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.414674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.414732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.414748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.419614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.419663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.419710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.424385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.424442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.424469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.429124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.429182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.429199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.434148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.434204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.434221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.438921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.438998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.439031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.443765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.443822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.443839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.448578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.448635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.448652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.453520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.453576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.453592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.458224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.458281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.458297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.462932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.463027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.463045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.467714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 
01:37:51.467771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.467787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.472496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.472585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.472604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.477390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.477448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.477476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.574 [2024-09-28 01:37:51.482159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.574 [2024-09-28 01:37:51.482216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.574 [2024-09-28 01:37:51.482232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.575 [2024-09-28 01:37:51.486934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.575 [2024-09-28 01:37:51.487030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.575 [2024-09-28 01:37:51.487048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.575 [2024-09-28 01:37:51.491682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.575 [2024-09-28 01:37:51.491738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.575 [2024-09-28 01:37:51.491754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.575 [2024-09-28 01:37:51.496478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.575 [2024-09-28 01:37:51.496533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.575 [2024-09-28 01:37:51.496549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.575 [2024-09-28 01:37:51.501525] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.575 [2024-09-28 01:37:51.501594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.575 [2024-09-28 01:37:51.501611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.506941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.507043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.507063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.512229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.512285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.512301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.517325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.517383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.517400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.522242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.522297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.522313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.527140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.527199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.527216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.531809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.531879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.531895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.536559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.536615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.536631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.541311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.541369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.541386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.546114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.546171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.546187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.550875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.550932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.550949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.555765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.555823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.555840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.560438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.560521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.560537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.565317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.565374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.565390] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.569892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.569947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.569964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.574514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.574570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.574586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.579035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.579093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.579110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.583674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.583727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.583759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.588226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.588281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.588297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.834 [2024-09-28 01:37:51.592889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.834 [2024-09-28 01:37:51.592943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.834 [2024-09-28 01:37:51.592959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.597455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.597521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.597538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.602003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.602058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.602074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.606594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.606648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.606664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.611124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.611167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.611184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.615692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.615746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.615762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.620225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.620280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.620297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 6495.00 IOPS, 811.88 MiB/s [2024-09-28 01:37:51.626414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.626483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.626501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.630918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:24:55.835 [2024-09-28 01:37:51.630981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.631015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.635545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.635599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.635615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.640191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.640247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.640263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.644871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.644926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.644943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.649538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.649593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.649608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.654111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.654166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.654183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.658644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.658699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.658715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.663249] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.663308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.663340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.667920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.667976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.667991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.672505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.672561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.672577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.677093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.677149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.677166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.681765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.681820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.681836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.686337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.686392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.686408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.690856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.690911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.690926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.695543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.695597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.695613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.700051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.700106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.700122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.704673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.704729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.704744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.709342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.709398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.709414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.714093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.714148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.714164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.718673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.718728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.718745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.723394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.723449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.723476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.727936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.727991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.728007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.732505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.732545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.732561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.737100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.737156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.737172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.741780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.741835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.741851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.746325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.746380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.746396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.750805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.750860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.750877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.755414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.755481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.755497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.759925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.759981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.759997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.835 [2024-09-28 01:37:51.765042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:55.835 [2024-09-28 01:37:51.765097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.835 [2024-09-28 01:37:51.765113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.770134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.770203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.770221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.774881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.774937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.774953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.779605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.779659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.779674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.784080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.784148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.784165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.788720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.788776] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.788793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.793281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.793337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.793363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.797921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.797975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.797991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.802525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.802580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.802596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.806970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.807050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.807067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.811540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.811594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.811609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.816014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.816069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.816086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.820575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.820631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.820647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.825148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.825204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.825220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.829716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.829771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.829787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.834353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.834408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.834424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.838899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.838954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.838969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.843565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.843620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.843636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.848023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.848078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.848095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.095 
[2024-09-28 01:37:51.852619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.852674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.852690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.857193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.857249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.857265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.861769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.861824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.861840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.866368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.866425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.866441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.870929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.871038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.871056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.875679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.875734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.875750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.880643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.880703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.880722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.885860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.885916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.885932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.891156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.891216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.891234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.896651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.896711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.896729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.902045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.902102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.902118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.907405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.907490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.907510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.912583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.095 [2024-09-28 01:37:51.912638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.095 [2024-09-28 01:37:51.912655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.095 [2024-09-28 01:37:51.917638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.917696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 
[2024-09-28 01:37:51.917712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.922465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.922531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.922547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.926967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.927066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.927083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.931802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.931876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.931892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.936556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.936611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.936627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.941197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.941254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.941271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.945816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.945873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.945889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.950460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.950525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.950542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.955053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.955096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.955112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.959615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.959669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.959685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.964137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.964193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.964208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.968710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.968766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.968783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.973285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.973340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.973356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.978000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.978056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.978072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.982579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 
[2024-09-28 01:37:51.982634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.982650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.987171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.987229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.987246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.991760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.991816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.991832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:51.996195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:51.996250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:51.996266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:52.000783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:52.000839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:52.000870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:52.005366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:52.005421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:52.005436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:52.010023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:52.010078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:52.010093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:52.014596] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:52.014652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:52.014668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:52.019126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:52.019169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:52.019186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.096 [2024-09-28 01:37:52.024035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.096 [2024-09-28 01:37:52.024108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.096 [2024-09-28 01:37:52.024126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.029265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.029321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.029337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.034149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.034204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.034220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.038750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.038804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.038821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.043395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.043449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.043477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.047993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.048048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.048064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.052611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.052666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.052683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.057171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.057226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.057242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.061757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.061812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.061828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.066380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.066436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.066451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.070827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.070882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.070898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.075425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.075489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.075506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.079972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.080028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.080043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.084718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.084774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.084790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.089393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.089449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.089478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.093987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.094043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.094058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.098521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.098576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.098591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.103064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.103107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.103123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.107669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.107726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.107743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.112202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.112257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.112273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.116810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.116865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.116881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.121438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.121504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.121520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.125999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.126055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.126072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.130551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.130605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.130620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.135034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.135109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.135129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.139735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 
01:37:52.139791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.139807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.144298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.356 [2024-09-28 01:37:52.144353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.356 [2024-09-28 01:37:52.144369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.356 [2024-09-28 01:37:52.148867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.148922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.148938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.153384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.153439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.153454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.157998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.158053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.158070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.162513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.162568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.162583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.167046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.167089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.167107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.171594] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.171650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.171666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.176222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.176276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.176293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.180929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.180987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.181019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.185690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.185741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.185759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.190306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.190361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.190378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.194811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.194866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.194882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.199321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.199393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.199410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.203996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.204052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.204069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.208725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.208782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.208798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.213222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.213277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.213293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.217788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.217842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.217858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.222337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.222392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.222408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.226854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.226909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.226926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.231377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.231430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.231445] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.235963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.236018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.236034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.240535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.240589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.240605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.245045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.245101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.245116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.249624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.249679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.249695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.254131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.254186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.254203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.258698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.258753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.258769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.263099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.263156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.263173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.267746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.357 [2024-09-28 01:37:52.267802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-09-28 01:37:52.267820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.357 [2024-09-28 01:37:52.272383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.358 [2024-09-28 01:37:52.272438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.358 [2024-09-28 01:37:52.272454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.358 [2024-09-28 01:37:52.276928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.358 [2024-09-28 01:37:52.276983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.358 [2024-09-28 01:37:52.276999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.358 [2024-09-28 01:37:52.281620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.358 [2024-09-28 01:37:52.281674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.358 [2024-09-28 01:37:52.281690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.286685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.286742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.286761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.291686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.291758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.291776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.296658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.296713] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.296729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.301266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.301320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.301337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.305979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.306034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.306050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.310611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.310666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.310683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.315179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.315224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.315241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.319806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.319877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.319893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.324493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.324547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.324563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.329031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.329086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.329103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.333675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.333729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.333744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.338145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.338200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.338216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.342844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.618 [2024-09-28 01:37:52.342899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.618 [2024-09-28 01:37:52.342916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.618 [2024-09-28 01:37:52.347468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.347534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.347551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.352006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.352061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.352077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.356621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.356675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.356691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:56.619 [2024-09-28 01:37:52.361156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.361212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.361228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.365844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.365898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.365914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.370370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.370425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.370440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.374785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.374841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.374857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.379439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.379534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.379551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.384004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.384060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.384075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.388659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.388713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.388729] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.393188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.393244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.393260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.397851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.397906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.397923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.402388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.402442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.402468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.406883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.406938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.406954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.411549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.411617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.411633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.416231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.416288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.416304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.420990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.421047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.619 [2024-09-28 01:37:52.421063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.425689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.425745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.425761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.430251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.619 [2024-09-28 01:37:52.430307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.619 [2024-09-28 01:37:52.430323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.619 [2024-09-28 01:37:52.434791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.434845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.434861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.439409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.439485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.439504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.443982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.444036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.444052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.448564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.448619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.448635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.453199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.453254] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.453270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.457764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.457821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.457836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.462306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.462361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.462377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.466846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.466901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.466919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.471489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.471556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.471573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.476032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.476086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.476103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.480686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.480743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.480759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.485340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.485395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.485411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.489977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.490033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.490049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.494535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.494591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.494606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.499024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.499083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.499101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.503603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.503659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.503675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.508222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.508277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.508293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.512830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.512885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.512901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.517463] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.517546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.620 [2024-09-28 01:37:52.517563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.620 [2024-09-28 01:37:52.522628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.620 [2024-09-28 01:37:52.522684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.621 [2024-09-28 01:37:52.522700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.621 [2024-09-28 01:37:52.527558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.621 [2024-09-28 01:37:52.527614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.621 [2024-09-28 01:37:52.527630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.621 [2024-09-28 01:37:52.532581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.621 [2024-09-28 01:37:52.532639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.621 [2024-09-28 01:37:52.532656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.621 [2024-09-28 01:37:52.537966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.621 [2024-09-28 01:37:52.538025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.621 [2024-09-28 01:37:52.538042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.621 [2024-09-28 01:37:52.543446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.621 [2024-09-28 01:37:52.543539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.621 [2024-09-28 01:37:52.543558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.549084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.549143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.549175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.554425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.554511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.554530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.560028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.560084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.560101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.564973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.565030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.565047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.569887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.569945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.569961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.574766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.574823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.574856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.579637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.579693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.579709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.584341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.584398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.584416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.589122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.589178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.589194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.593984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.594040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.594056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.598555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.598612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.598628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.603269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.603342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.603359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.608054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.608112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.608129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.613041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.613098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.613114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.617800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.617857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.617873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.881 [2024-09-28 01:37:52.622495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:56.881 [2024-09-28 01:37:52.622551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.881 [2024-09-28 01:37:52.622568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.881 6556.50 IOPS, 819.56 MiB/s 00:24:56.881 Latency(us) 00:24:56.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.881 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:56.881 nvme0n1 : 2.00 6557.46 819.68 0.00 0.00 2436.57 1995.87 6225.92 00:24:56.881 =================================================================================================================== 00:24:56.881 Total : 6557.46 819.68 0.00 0.00 2436.57 1995.87 6225.92 00:24:56.881 { 00:24:56.881 "results": [ 00:24:56.881 { 00:24:56.881 "job": "nvme0n1", 00:24:56.881 "core_mask": "0x2", 00:24:56.881 "workload": "randread", 00:24:56.881 "status": "finished", 00:24:56.881 "queue_depth": 16, 00:24:56.881 "io_size": 131072, 00:24:56.881 "runtime": 2.002148, 00:24:56.881 "iops": 6557.457290869606, 00:24:56.881 "mibps": 819.6821613587008, 00:24:56.881 "io_failed": 0, 00:24:56.881 "io_timeout": 0, 00:24:56.881 "avg_latency_us": 2436.574174866188, 00:24:56.881 "min_latency_us": 1995.8690909090908, 00:24:56.881 "max_latency_us": 6225.92 00:24:56.881 } 00:24:56.881 ], 00:24:56.881 "core_count": 1 00:24:56.881 } 00:24:56.881 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:56.881 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:56.881 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:56.881 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:56.881 | .driver_specific 00:24:56.881 | .nvme_error 00:24:56.881 | .status_code 00:24:56.881 | .command_transient_transport_error' 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 423 > 0 )) 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86432 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86432 ']' 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86432 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86432 00:24:57.142 killing process with pid 86432 00:24:57.142 Received 
shutdown signal, test time was about 2.000000 seconds 00:24:57.142 00:24:57.142 Latency(us) 00:24:57.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.142 =================================================================================================================== 00:24:57.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86432' 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86432 00:24:57.142 01:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86432 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86493 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86493 /var/tmp/bperf.sock 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86493 ']' 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.112 01:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:58.112 [2024-09-28 01:37:53.976154] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
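The run_bperf_err pass above starts a second bdevperf instance in idle mode: -z keeps it waiting on the RPC socket named by -r until perform_tests is issued, while -w/-o/-q/-t carry the randwrite, 4 KiB, queue-depth-128, 2-second parameters of this pass. A minimal shell sketch of that launch, using the binary path and flags from the trace and a simplified stand-in for the harness's waitforlisten helper:

    # Start bdevperf idle (-z): the randwrite/4KiB/QD128 workload is configured up front
    # but does not run until perform_tests arrives over the RPC socket given by -r.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Simplified stand-in for waitforlisten: poll until the UNIX RPC socket exists.
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done
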
00:24:58.112 [2024-09-28 01:37:53.976287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86493 ] 00:24:58.371 [2024-09-28 01:37:54.133810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.371 [2024-09-28 01:37:54.283440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.630 [2024-09-28 01:37:54.429262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:59.196 01:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:59.196 01:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:59.196 01:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:59.196 01:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:59.196 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:59.196 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.196 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.196 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.196 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.196 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.762 nvme0n1 00:24:59.762 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:59.762 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.762 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.762 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.762 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:59.762 01:37:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.762 Running I/O for 2 seconds... 
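Before the 2-second run starts, the whole error pass is wired up over JSON-RPC, as the digest.sh trace above shows: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, crc32c error injection is switched off while the controller attaches with data digest (--ddgst) enabled, and injection is then re-armed to corrupt 256 crc32c operations so digest verification fails during the run. A condensed sketch of the same sequence; the rpc.py and bdevperf.py paths and all flags are copied from the trace, and routing the accel_error_inject_error calls through the target application's default RPC socket is an assumption (the trace does not show rpc_cmd's expansion):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF="$RPC -s /var/tmp/bperf.sock"          # what bperf_rpc expands to in the trace

    # Per-bdev NVMe error counters on; transient failures retried indefinitely.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Injection off while attaching with data digest enabled (assumed to go to the target app).
    $RPC accel_error_inject_error -o crc32c -t disable
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm: corrupt the next 256 crc32c operations, then kick off the 2-second run.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

After the run, the same accounting as in the randread pass applies: get_transient_errcount reads bdev_get_iostat for nvme0n1 and extracts .driver_specific.nvme_error.status_code.command_transient_transport_error with jq, and host/digest.sh@71 asserts that the count is greater than zero.
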
00:24:59.762 [2024-09-28 01:37:55.575014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfef90 00:24:59.762 [2024-09-28 01:37:55.577808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.762 [2024-09-28 01:37:55.577906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.762 [2024-09-28 01:37:55.592202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfeb58 00:24:59.762 [2024-09-28 01:37:55.594935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.762 [2024-09-28 01:37:55.595011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:59.762 [2024-09-28 01:37:55.608990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe2e8 00:24:59.762 [2024-09-28 01:37:55.611760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.763 [2024-09-28 01:37:55.611813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:59.763 [2024-09-28 01:37:55.625517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:24:59.763 [2024-09-28 01:37:55.628199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.763 [2024-09-28 01:37:55.628261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:59.763 [2024-09-28 01:37:55.641965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfd208 00:24:59.763 [2024-09-28 01:37:55.644557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.763 [2024-09-28 01:37:55.644616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:59.763 [2024-09-28 01:37:55.658190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfc998 00:24:59.763 [2024-09-28 01:37:55.660789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.763 [2024-09-28 01:37:55.660841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:59.763 [2024-09-28 01:37:55.674435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfc128 00:24:59.763 [2024-09-28 01:37:55.676982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.763 [2024-09-28 01:37:55.677035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:59.763 [2024-09-28 01:37:55.690864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfb8b8 00:24:59.763 [2024-09-28 01:37:55.693935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.763 [2024-09-28 01:37:55.693989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.708663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfb048 00:25:00.031 [2024-09-28 01:37:55.711204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.711258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.725249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfa7d8 00:25:00.031 [2024-09-28 01:37:55.727853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.727910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.741745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df9f68 00:25:00.031 [2024-09-28 01:37:55.744275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.744328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.758003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df96f8 00:25:00.031 [2024-09-28 01:37:55.760463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.760516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.774351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df8e88 00:25:00.031 [2024-09-28 01:37:55.776739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.776802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.790521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df8618 00:25:00.031 [2024-09-28 01:37:55.792919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.792977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.806666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df7da8 00:25:00.031 [2024-09-28 01:37:55.809190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.809243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.823101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df7538 00:25:00.031 [2024-09-28 01:37:55.825542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.825587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.839563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df6cc8 00:25:00.031 [2024-09-28 01:37:55.841858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.841918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.856187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df6458 00:25:00.031 [2024-09-28 01:37:55.858571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.858628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.872486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df5be8 00:25:00.031 [2024-09-28 01:37:55.874786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.874846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.888737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df5378 00:25:00.031 [2024-09-28 01:37:55.891068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.891123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.904939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df4b08 00:25:00.031 [2024-09-28 01:37:55.907262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:00.031 [2024-09-28 01:37:55.907332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.921244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df4298 00:25:00.031 [2024-09-28 01:37:55.923572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.923635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.937608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df3a28 00:25:00.031 [2024-09-28 01:37:55.939898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.939957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:00.031 [2024-09-28 01:37:55.954235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df31b8 00:25:00.031 [2024-09-28 01:37:55.956564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.031 [2024-09-28 01:37:55.956622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:55.972619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df2948 00:25:00.292 [2024-09-28 01:37:55.975124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:55.975183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:55.992428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df20d8 00:25:00.292 [2024-09-28 01:37:55.994948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:55.995023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.010526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df1868 00:25:00.292 [2024-09-28 01:37:56.012729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.012769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.027498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df0ff8 00:25:00.292 [2024-09-28 01:37:56.029637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:23082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.029690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.044327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df0788 00:25:00.292 [2024-09-28 01:37:56.046528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.046588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.060778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deff18 00:25:00.292 [2024-09-28 01:37:56.062863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.062921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.077308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019def6a8 00:25:00.292 [2024-09-28 01:37:56.079593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.079645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.093989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deee38 00:25:00.292 [2024-09-28 01:37:56.096195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.096248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.110367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dee5c8 00:25:00.292 [2024-09-28 01:37:56.112445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.112524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.126613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dedd58 00:25:00.292 [2024-09-28 01:37:56.128628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.128689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.142868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ded4e8 00:25:00.292 [2024-09-28 01:37:56.144983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.145044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.159217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019decc78 00:25:00.292 [2024-09-28 01:37:56.161252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.161305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.175719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dec408 00:25:00.292 [2024-09-28 01:37:56.177671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.177724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.192066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019debb98 00:25:00.292 [2024-09-28 01:37:56.194038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.194090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:00.292 [2024-09-28 01:37:56.208340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deb328 00:25:00.292 [2024-09-28 01:37:56.210270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.292 [2024-09-28 01:37:56.210331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.225519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deaab8 00:25:00.551 [2024-09-28 01:37:56.227824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.228063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.242648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dea248 00:25:00.551 [2024-09-28 01:37:56.244619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.244680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.260193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de99d8 
00:25:00.551 [2024-09-28 01:37:56.262283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.262342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.279073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de9168 00:25:00.551 [2024-09-28 01:37:56.281295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.281339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.297563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de88f8 00:25:00.551 [2024-09-28 01:37:56.299731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.299943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.315459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de8088 00:25:00.551 [2024-09-28 01:37:56.317386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.317445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.332828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de7818 00:25:00.551 [2024-09-28 01:37:56.334697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.334764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.350582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de6fa8 00:25:00.551 [2024-09-28 01:37:56.352717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.352782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.368203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de6738 00:25:00.551 [2024-09-28 01:37:56.370068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.370134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.385485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x200019de5ec8 00:25:00.551 [2024-09-28 01:37:56.387443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.551 [2024-09-28 01:37:56.387524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.551 [2024-09-28 01:37:56.402693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de5658 00:25:00.551 [2024-09-28 01:37:56.404794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.552 [2024-09-28 01:37:56.404852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:00.552 [2024-09-28 01:37:56.420373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de4de8 00:25:00.552 [2024-09-28 01:37:56.422182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.552 [2024-09-28 01:37:56.422226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:00.552 [2024-09-28 01:37:56.437684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de4578 00:25:00.552 [2024-09-28 01:37:56.439545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.552 [2024-09-28 01:37:56.439613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:00.552 [2024-09-28 01:37:56.455224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de3d08 00:25:00.552 [2024-09-28 01:37:56.457049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.552 [2024-09-28 01:37:56.457108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:00.552 [2024-09-28 01:37:56.472679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de3498 00:25:00.552 [2024-09-28 01:37:56.474355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.552 [2024-09-28 01:37:56.474419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.491345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de2c28 00:25:00.811 [2024-09-28 01:37:56.493080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.493141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 
01:37:56.508560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de23b8 00:25:00.811 [2024-09-28 01:37:56.510175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.510234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.524968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de1b48 00:25:00.811 [2024-09-28 01:37:56.526663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.526716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.541651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de12d8 00:25:00.811 [2024-09-28 01:37:56.543266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.543529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.811 14802.00 IOPS, 57.82 MiB/s [2024-09-28 01:37:56.558454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de0a68 00:25:00.811 [2024-09-28 01:37:56.560062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.560126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.574761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de01f8 00:25:00.811 [2024-09-28 01:37:56.576303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.576366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.591211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ddf988 00:25:00.811 [2024-09-28 01:37:56.592994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.593035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.608517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ddf118 00:25:00.811 [2024-09-28 01:37:56.610016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.610057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.624886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dde8a8 00:25:00.811 [2024-09-28 01:37:56.626512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.626571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.641249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dde038 00:25:00.811 [2024-09-28 01:37:56.642793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.642856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.664218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dde038 00:25:00.811 [2024-09-28 01:37:56.666905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.667121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.680700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dde8a8 00:25:00.811 [2024-09-28 01:37:56.683722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.683786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.697331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ddf118 00:25:00.811 [2024-09-28 01:37:56.700102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.700158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.714004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ddf988 00:25:00.811 [2024-09-28 01:37:56.716727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.716784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:00.811 [2024-09-28 01:37:56.730453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de01f8 00:25:00.811 [2024-09-28 01:37:56.733105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.811 [2024-09-28 01:37:56.733164] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:01.071 [2024-09-28 01:37:56.748242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de0a68 00:25:01.071 [2024-09-28 01:37:56.750822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-09-28 01:37:56.750888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:01.071 [2024-09-28 01:37:56.764671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de12d8 00:25:01.071 [2024-09-28 01:37:56.767227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-09-28 01:37:56.767449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:01.071 [2024-09-28 01:37:56.781311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de1b48 00:25:01.071 [2024-09-28 01:37:56.784106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-09-28 01:37:56.784157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:01.071 [2024-09-28 01:37:56.797810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de23b8 00:25:01.071 [2024-09-28 01:37:56.800348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.071 [2024-09-28 01:37:56.800389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:01.071 [2024-09-28 01:37:56.814118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de2c28 00:25:01.071 [2024-09-28 01:37:56.816676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.816745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.830599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de3498 00:25:01.072 [2024-09-28 01:37:56.833528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.833589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.847524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de3d08 00:25:01.072 [2024-09-28 01:37:56.849924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15822 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.849985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.863782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de4578 00:25:01.072 [2024-09-28 01:37:56.866222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.866263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.880344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de4de8 00:25:01.072 [2024-09-28 01:37:56.882793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.883021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.897013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de5658 00:25:01.072 [2024-09-28 01:37:56.899479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.899572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.913353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de5ec8 00:25:01.072 [2024-09-28 01:37:56.915813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.915875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.929599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de6738 00:25:01.072 [2024-09-28 01:37:56.932226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.932289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.946225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de6fa8 00:25:01.072 [2024-09-28 01:37:56.948678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.948736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.962642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de7818 00:25:01.072 [2024-09-28 01:37:56.964979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:23814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.965019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.979165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de8088 00:25:01.072 [2024-09-28 01:37:56.981527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:56.981730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.072 [2024-09-28 01:37:56.997581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de88f8 00:25:01.072 [2024-09-28 01:37:57.000521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.072 [2024-09-28 01:37:57.000745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.018051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de9168 00:25:01.331 [2024-09-28 01:37:57.020893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.020950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.036462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de99d8 00:25:01.331 [2024-09-28 01:37:57.038734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.038947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.053206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dea248 00:25:01.331 [2024-09-28 01:37:57.055554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.055614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.069573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deaab8 00:25:01.331 [2024-09-28 01:37:57.071875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.071932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.086309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deb328 00:25:01.331 [2024-09-28 01:37:57.088584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.088641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.102594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019debb98 00:25:01.331 [2024-09-28 01:37:57.105040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.105103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.119237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dec408 00:25:01.331 [2024-09-28 01:37:57.121434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.121521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.135682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019decc78 00:25:01.331 [2024-09-28 01:37:57.137788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.137853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.151960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ded4e8 00:25:01.331 [2024-09-28 01:37:57.154088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.331 [2024-09-28 01:37:57.154129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:01.331 [2024-09-28 01:37:57.168172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dedd58 00:25:01.332 [2024-09-28 01:37:57.170297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.332 [2024-09-28 01:37:57.170338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:01.332 [2024-09-28 01:37:57.184779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dee5c8 00:25:01.332 [2024-09-28 01:37:57.186803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.332 [2024-09-28 01:37:57.186866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.332 [2024-09-28 01:37:57.201145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200019deee38 00:25:01.332 [2024-09-28 01:37:57.203274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.332 [2024-09-28 01:37:57.203353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:01.332 [2024-09-28 01:37:57.217732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019def6a8 00:25:01.332 [2024-09-28 01:37:57.219854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.332 [2024-09-28 01:37:57.219916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:01.332 [2024-09-28 01:37:57.234040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deff18 00:25:01.332 [2024-09-28 01:37:57.236095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.332 [2024-09-28 01:37:57.236152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:01.332 [2024-09-28 01:37:57.250348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df0788 00:25:01.332 [2024-09-28 01:37:57.252390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.332 [2024-09-28 01:37:57.252431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.267837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df0ff8 00:25:01.591 [2024-09-28 01:37:57.269801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.269868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.284384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df1868 00:25:01.591 [2024-09-28 01:37:57.286379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.286441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.300710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df20d8 00:25:01.591 [2024-09-28 01:37:57.302603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.302665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.316986] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df2948 00:25:01.591 [2024-09-28 01:37:57.319015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.319074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.333733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df31b8 00:25:01.591 [2024-09-28 01:37:57.335718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.335777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.350000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df3a28 00:25:01.591 [2024-09-28 01:37:57.351984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.352047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.366279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df4298 00:25:01.591 [2024-09-28 01:37:57.368191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.368253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.382589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df4b08 00:25:01.591 [2024-09-28 01:37:57.384722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.384783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.399250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df5378 00:25:01.591 [2024-09-28 01:37:57.401179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.401220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.415894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df5be8 00:25:01.591 [2024-09-28 01:37:57.417673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.417730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:01.591 
[2024-09-28 01:37:57.432120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df6458 00:25:01.591 [2024-09-28 01:37:57.433946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.434014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.448507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df6cc8 00:25:01.591 [2024-09-28 01:37:57.450358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.450420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.464818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df7538 00:25:01.591 [2024-09-28 01:37:57.466778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.591 [2024-09-28 01:37:57.466840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:01.591 [2024-09-28 01:37:57.481380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df7da8 00:25:01.591 [2024-09-28 01:37:57.483196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.592 [2024-09-28 01:37:57.483256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:01.592 [2024-09-28 01:37:57.498834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df8618 00:25:01.592 [2024-09-28 01:37:57.500796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.592 [2024-09-28 01:37:57.500872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:01.592 [2024-09-28 01:37:57.517721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df8e88 00:25:01.592 [2024-09-28 01:37:57.519950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.592 [2024-09-28 01:37:57.520011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.851 [2024-09-28 01:37:57.537630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df96f8 00:25:01.851 [2024-09-28 01:37:57.539506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.851 [2024-09-28 01:37:57.539577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:01.851 14991.50 IOPS, 58.56 MiB/s [2024-09-28 01:37:57.556938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df9f68 00:25:01.851 [2024-09-28 01:37:57.558629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.851 [2024-09-28 01:37:57.558671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:01.851 00:25:01.851 Latency(us) 00:25:01.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.851 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:01.851 nvme0n1 : 2.01 14977.91 58.51 0.00 0.00 8537.31 6494.02 32172.22 00:25:01.851 =================================================================================================================== 00:25:01.851 Total : 14977.91 58.51 0.00 0.00 8537.31 6494.02 32172.22 00:25:01.851 { 00:25:01.851 "results": [ 00:25:01.851 { 00:25:01.851 "job": "nvme0n1", 00:25:01.851 "core_mask": "0x2", 00:25:01.851 "workload": "randwrite", 00:25:01.851 "status": "finished", 00:25:01.851 "queue_depth": 128, 00:25:01.851 "io_size": 4096, 00:25:01.851 "runtime": 2.010361, 00:25:01.851 "iops": 14977.90695302983, 00:25:01.851 "mibps": 58.50744903527277, 00:25:01.851 "io_failed": 0, 00:25:01.851 "io_timeout": 0, 00:25:01.851 "avg_latency_us": 8537.31493208462, 00:25:01.851 "min_latency_us": 6494.021818181818, 00:25:01.851 "max_latency_us": 32172.21818181818 00:25:01.851 } 00:25:01.851 ], 00:25:01.851 "core_count": 1 00:25:01.851 } 00:25:01.851 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:01.851 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:01.851 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:01.851 | .driver_specific 00:25:01.851 | .nvme_error 00:25:01.851 | .status_code 00:25:01.851 | .command_transient_transport_error' 00:25:01.851 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:02.110 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:25:02.110 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86493 00:25:02.110 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86493 ']' 00:25:02.110 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86493 00:25:02.110 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:02.110 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:02.110 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86493 00:25:02.110 killing process with pid 86493 00:25:02.111 Received shutdown signal, test time was about 2.000000 seconds 00:25:02.111 00:25:02.111 Latency(us) 00:25:02.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.111 
=================================================================================================================== 00:25:02.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.111 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:02.111 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:02.111 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86493' 00:25:02.111 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86493 00:25:02.111 01:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86493 00:25:03.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:03.048 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:03.048 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:03.048 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:03.048 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:03.048 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:03.048 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86557 00:25:03.049 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86557 /var/tmp/bperf.sock 00:25:03.049 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86557 ']' 00:25:03.049 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:03.049 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:03.049 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:03.049 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:03.049 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:03.049 01:37:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.049 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:03.049 Zero copy mechanism will not be used. 00:25:03.049 [2024-09-28 01:37:58.827762] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:25:03.049 [2024-09-28 01:37:58.827898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86557 ] 00:25:03.308 [2024-09-28 01:37:58.984669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.308 [2024-09-28 01:37:59.142245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.566 [2024-09-28 01:37:59.291059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:03.823 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.823 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:03.823 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:03.824 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:04.082 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:04.082 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.082 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:04.082 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.082 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:04.082 01:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:04.650 nvme0n1 00:25:04.650 01:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:04.650 01:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.650 01:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:04.650 01:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.650 01:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:04.650 01:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:04.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:04.650 Zero copy mechanism will not be used. 00:25:04.650 Running I/O for 2 seconds... 
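For readers tracing the run_bperf_err flow above: the sequence of RPCs in this log can be replayed by hand. The sketch below assumes a target already exporting nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420 and a bdevperf instance listening on /var/tmp/bperf.sock, exactly as in this job; the commands and flags are the ones visible in the log, only the shell wrapper around them is illustrative.

  #!/usr/bin/env bash
  # Sketch of the digest-error check exercised above (assumes a running
  # nvmf target and a bdevperf listening on /var/tmp/bperf.sock, as here).
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF=/var/tmp/bperf.sock

  # Enable per-command NVMe error statistics and set the bdev retry count,
  # matching the bdev_nvme_set_options call in the log.
  "$SPDK"/scripts/rpc.py -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Inject crc32c corruption in the accel layer (-o crc32c -t corrupt -i 32);
  # issued without -s here, mirroring the rpc_cmd call above, so it goes to
  # the default RPC socket rather than bperf.sock.
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Attach the target with data digest enabled (--ddgst) so the corrupted
  # CRCs surface as "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR.
  "$SPDK"/scripts/rpc.py -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the workload, then read back how many commands completed with a
  # transient transport error.
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF" perform_tests
  "$SPDK"/scripts/rpc.py -s "$BPERF" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The `(( 118 > 0 ))` check earlier in this log is exactly that final jq count being asserted non-zero for the preceding 4096-byte randwrite pass; the 131072-byte/qd-16 pass that starts below is validated the same way.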
00:25:04.650 [2024-09-28 01:38:00.450858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.451264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.451324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.457129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.457681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.457723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.463462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.463827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.463864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.469502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.469862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.469912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.475620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.475946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.475981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.481537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.481887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.481923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.487656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.487993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.488036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.493650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.493987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.494030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.499998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.500314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.500349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.505862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.506191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.506234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.511883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.512191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.512233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.517776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.518093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.518129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.523684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.524019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.524053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.529418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.529937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 
01:38:00.529998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.535737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.536091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.536124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.541563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.541881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.541915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.650 [2024-09-28 01:38:00.547552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.650 [2024-09-28 01:38:00.547885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.650 [2024-09-28 01:38:00.547926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.651 [2024-09-28 01:38:00.553346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.651 [2024-09-28 01:38:00.553852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.651 [2024-09-28 01:38:00.553916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.651 [2024-09-28 01:38:00.559423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.651 [2024-09-28 01:38:00.559824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.651 [2024-09-28 01:38:00.559865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.651 [2024-09-28 01:38:00.565363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.651 [2024-09-28 01:38:00.565905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.651 [2024-09-28 01:38:00.565967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.651 [2024-09-28 01:38:00.571670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.651 [2024-09-28 01:38:00.572009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.651 [2024-09-28 01:38:00.572051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.651 [2024-09-28 01:38:00.577680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.651 [2024-09-28 01:38:00.578037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.651 [2024-09-28 01:38:00.578073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.584244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.584609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.584650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.590465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.590772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.590817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.596229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.596594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.596634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.602069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.602391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.602425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.608433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.608842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.608900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.614687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.615059] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.615104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.620713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.621039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.621074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.626691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.627054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.627099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.632663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.632997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.633039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.638692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.639044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.639080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.644529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.644844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.644878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.650274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.650653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.650701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.656314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.656820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.656874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.662340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.662709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.662749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.668221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.668757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.668805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.674268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.674624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.674672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.680215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.680752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.680792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.686283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.686642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.686691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.692248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.692741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.692804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 
01:38:00.698358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.698729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.698769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.704212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.704747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.704786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.710112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.911 [2024-09-28 01:38:00.710423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.911 [2024-09-28 01:38:00.710475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.911 [2024-09-28 01:38:00.716180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.716661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.716707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.722940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.723382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.723412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.729812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.730111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.730147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.736090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.736607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.736783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.742360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.742937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.743172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.748788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.749329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.749568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.755272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.755809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.755993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.761588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.762106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.762374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.768089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.768611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.768670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.774171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.774516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.774564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.780375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.780914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.780969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.786865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.787220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.787262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.792928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.793246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.793287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.798797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.799171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.799207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.804841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.805185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.805220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.810835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.811182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.811225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.816839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.817192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.817227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.822695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.823057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.823092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.828475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.828829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.828877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.834373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.834742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.834804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.912 [2024-09-28 01:38:00.840716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:04.912 [2024-09-28 01:38:00.841095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.912 [2024-09-28 01:38:00.841129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.847104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.847490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.847589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.853107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.853415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.853471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.858876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.859268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.859320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.864734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.865068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.865102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.870640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.870949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.871013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.876483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.876836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.876884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.882308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.882674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.882713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.888202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.888695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.888760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.894208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.894549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.894591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.899939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.900430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.900510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.905995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.906311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.906345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.911746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.912056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.912097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.917543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.917876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.917909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.923329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.923858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.923911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.929330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.929692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.929770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.935232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.935766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.935814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.941210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.941557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.173 [2024-09-28 01:38:00.941592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.173 [2024-09-28 01:38:00.947072] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.173 [2024-09-28 01:38:00.947583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.947628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:00.953072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:00.953379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.953420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:00.958997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:00.959482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.959554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:00.964961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:00.965279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.965313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:00.970744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:00.971113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.971157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:00.976495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:00.976844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.976892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:00.982239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:00.982777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.982817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:00.988271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:00.988647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.988694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:00.994146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:00.994681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:00.994729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.000242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.000606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.000646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.006054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.006602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.006642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.012128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.012437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.012507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.017939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.018433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.018497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.023910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.024226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.024260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.029795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.030108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.030149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.035617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.035927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.035970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.041377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.041927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.041967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.047678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.047986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.048024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.053525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.053833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.053875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.059295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.059697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.059737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.065158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.065681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.065722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.071229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.071617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.071667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.077710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.078065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.078099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.084390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.084820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.084931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.091694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.092080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.092173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.174 [2024-09-28 01:38:01.098416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.174 [2024-09-28 01:38:01.098939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.174 [2024-09-28 01:38:01.099019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.435 [2024-09-28 01:38:01.105755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.435 [2024-09-28 01:38:01.106124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.435 [2024-09-28 01:38:01.106158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.435 [2024-09-28 01:38:01.112689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.435 [2024-09-28 01:38:01.113054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.435 [2024-09-28 01:38:01.113098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.119131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.119504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.119548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.124938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.125439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.125504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.130954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.131352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.131410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.136951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.137258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.137300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.142747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.143132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.143168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.148746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.149062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.149107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.154646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.155012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.155055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.160393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.160906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.160960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.166404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.166754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.166789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.172217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.172765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.172813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.178207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.178537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.178571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.184219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.184770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.184811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.190306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.190629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.190671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.195978] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.196467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.196540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.201946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.202262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.202296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.207844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.208191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.208225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.213686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.213997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.214038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.219563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.219899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.219932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.225319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.225680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.225716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.231099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.231562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.231623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.237713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.238059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.238103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.244162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.244525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.244560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.250664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.251050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.251095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.257051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.257419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.257497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.263615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.264000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.264034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.270077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.270630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.436 [2024-09-28 01:38:01.270680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.436 [2024-09-28 01:38:01.276607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.436 [2024-09-28 01:38:01.276922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.276964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.282600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.282941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.282995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.288662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.288995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.289034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.294758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.295121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.295165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.300709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.301039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.301073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.306653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.307018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.307069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.312624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.312942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.312984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.318894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.319268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.319315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.324872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.325201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.325235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.330855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.331227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.331270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.336864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.337180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.337224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.343083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.343504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.343549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.349050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.349364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.349408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.355017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.355378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.355420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.437 [2024-09-28 01:38:01.360963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.437 [2024-09-28 01:38:01.361285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.437 [2024-09-28 01:38:01.361320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.367662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.368145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.368241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.374576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.374938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.375013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.380662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.381029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.381070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.386715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.387130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.387175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.392968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.393342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.393391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.399249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.399689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.399731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.405222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.405614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.405655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.411351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.411767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.411820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.417352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.417736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.417801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.423631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.424012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.424053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.429686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.430078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.430127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.435792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.436168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.436218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.442063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.442449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.442501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.698 5090.00 IOPS, 636.25 MiB/s [2024-09-28 
01:38:01.449005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.449376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.449425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.455137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.455516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.455561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.461530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.461895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.461936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.467596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.467957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.467997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.473674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.474036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.474076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.479740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.480110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.480151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.485928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.486300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.486341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.492010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.492372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.492413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.498018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.498388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.498429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.504424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.504811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.504867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.510563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.510954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.511020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.516739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.517117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.698 [2024-09-28 01:38:01.517157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.698 [2024-09-28 01:38:01.522848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.698 [2024-09-28 01:38:01.523229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.523269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.528684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.529048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.529088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.534676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.535038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.535078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.540717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.541079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.541119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.546796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.547169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.547211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.552635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.553000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.553040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.558531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.558878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.558918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.564336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.564717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.564756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.570172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.570548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.570587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.576083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.576437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.576487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.581945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.582312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.582352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.587998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.588355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.588394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.593786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.594152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.594193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.599653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.599992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.600031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.605464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.605825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.605864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.611561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.611910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.611950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.617358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.617742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.617782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.699 [2024-09-28 01:38:01.623221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.699 [2024-09-28 01:38:01.623611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.699 [2024-09-28 01:38:01.623649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.629660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.630065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.630137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.636188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.636567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.636607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.642076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.642436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.642487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.648163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.648528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.648568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.654057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.654420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.654471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.659876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.660239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.660279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.665749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.666122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.666161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.671620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.671980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.672021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.677473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.677839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.677878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.683527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.683889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.683929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.689496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.689856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.689897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.695552] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.695903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.695944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.701413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.701792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.701833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.707469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.707829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.707869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.713326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.713707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.960 [2024-09-28 01:38:01.713747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.960 [2024-09-28 01:38:01.719167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.960 [2024-09-28 01:38:01.719565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.719597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.725070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.725434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.725483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.731146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.731524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.731576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.737133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.737498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.737551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.742953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.743314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.743353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.748708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.749075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.749115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.754642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.755007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.755048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.760538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.760907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.760947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.766314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.766695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.766734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.772160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.772562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.772599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.777885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.778247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.778288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.783787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.784151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.784192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.789696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.790046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.790086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.795558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.795906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.795946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.801303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.801674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.801715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.807090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.807430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.807479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.812979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.813339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.813380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.818796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.819175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.819216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.824626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.824990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.825031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.830541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.830906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.830946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.836350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.836727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.836767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.842150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.842524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.842563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.848122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.848478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.848530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.853892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.854257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.854298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.859888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.860250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.860290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.865722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.866085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.866125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.871646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.961 [2024-09-28 01:38:01.872007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.961 [2024-09-28 01:38:01.872047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.961 [2024-09-28 01:38:01.877530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.962 [2024-09-28 01:38:01.877892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.962 [2024-09-28 01:38:01.877931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.962 [2024-09-28 01:38:01.883523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.962 [2024-09-28 01:38:01.883896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.962 [2024-09-28 01:38:01.883936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.962 [2024-09-28 01:38:01.889951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:05.962 [2024-09-28 01:38:01.890320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.962 [2024-09-28 01:38:01.890360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.221 [2024-09-28 01:38:01.896370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.896828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.896871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.902412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.902785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.902826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.908336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.908703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.908743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.914113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.914476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.914526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.919932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.920295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.920337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.925851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.926218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.926258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.931740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.932102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.932141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.937613] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.937976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.938015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.943618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.943999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.944039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.949630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.949991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.950031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.955640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.956007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.956047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.961524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.961886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.961927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.967404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.967797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.967837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.973365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.973749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.973791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.979270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.979669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.979709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.985094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.985461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.985511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.990968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.991350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.991390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:01.996750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:01.997112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:01.997153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.002578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.002945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.002994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.008484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.008830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.008870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.014299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.014672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.014713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.020204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.020607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.020649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.026205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.026581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.026621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.032141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.032507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.032547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.038031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.038391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.038431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.043981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.044366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.044407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.222 [2024-09-28 01:38:02.049811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.222 [2024-09-28 01:38:02.050170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.222 [2024-09-28 01:38:02.050210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.055745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.056108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.056148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.061664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.062023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.062063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.067732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.068094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.068135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.073563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.073923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.073964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.079674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.080040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.080080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.085553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.085902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.085942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.091762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.092127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.092167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.098020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.098388] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.098429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.104807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.105217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.105259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.111966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.112356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.112399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.118683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.119155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.119199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.125399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.125887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.125928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.132032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.132395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.132436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.138499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.138921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.138961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.144998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 
[2024-09-28 01:38:02.145359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.145399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.223 [2024-09-28 01:38:02.151499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.223 [2024-09-28 01:38:02.151951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.223 [2024-09-28 01:38:02.152006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.158064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.158465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.158518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.164035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.164399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.164440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.169937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.170310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.170351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.175997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.176358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.176399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.181838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.182236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.182277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.187912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.188280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.188321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.193773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.194136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.194176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.199595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.199977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.200017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.205557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.205914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.205953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.211431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.211812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.211853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.217424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.217809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.217851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.223466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.223854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.223895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.483 
[2024-09-28 01:38:02.229372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.229753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.229793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.235202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.235607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.235646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.483 [2024-09-28 01:38:02.240995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.483 [2024-09-28 01:38:02.241356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.483 [2024-09-28 01:38:02.241395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.246869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.247250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.247290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.252687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.253050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.253090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.258475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.258838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.258878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.264255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.264632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.264673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.270055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.270419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.270468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.275938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.276307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.276347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.281766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.282129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.282170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.287695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.288081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.288122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.293740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.294089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.294129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.299644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.300005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.300045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.305505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.305855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.305895] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.311340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.311744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.311785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.317197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.317591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.317632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.323031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.323374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.323415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.328818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.329181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.329222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.334671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.335049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.335090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.340710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.341070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.341110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.346552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.346904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.346944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.352344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.352723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.352763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.358143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.358517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.358557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.364061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.364413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.364463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.369891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.370257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.370297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.375787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.376148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.376188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.381648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.381998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.382039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.387538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.387901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.387942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.393379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.393755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.393795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.484 [2024-09-28 01:38:02.399228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.484 [2024-09-28 01:38:02.399619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.484 [2024-09-28 01:38:02.399657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.485 [2024-09-28 01:38:02.404999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.485 [2024-09-28 01:38:02.405362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.485 [2024-09-28 01:38:02.405403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.485 [2024-09-28 01:38:02.411031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.485 [2024-09-28 01:38:02.411430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.485 [2024-09-28 01:38:02.411481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.744 [2024-09-28 01:38:02.417510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.744 [2024-09-28 01:38:02.417860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.744 [2024-09-28 01:38:02.417900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.744 [2024-09-28 01:38:02.423766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.744 [2024-09-28 01:38:02.424128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.744 [2024-09-28 01:38:02.424169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.744 [2024-09-28 01:38:02.429694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:06.744 [2024-09-28 01:38:02.430058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.744 [2024-09-28 01:38:02.430097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.744 [2024-09-28 01:38:02.435722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.744 [2024-09-28 01:38:02.436086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.744 [2024-09-28 01:38:02.436127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.744 [2024-09-28 01:38:02.441597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:06.744 [2024-09-28 01:38:02.441963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.744 [2024-09-28 01:38:02.442003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.744 5135.50 IOPS, 641.94 MiB/s 00:25:06.744 Latency(us) 00:25:06.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.744 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:06.744 nvme0n1 : 2.00 5135.19 641.90 0.00 0.00 3108.36 1630.95 7328.12 00:25:06.744 =================================================================================================================== 00:25:06.744 Total : 5135.19 641.90 0.00 0.00 3108.36 1630.95 7328.12 00:25:06.744 { 00:25:06.744 "results": [ 00:25:06.744 { 00:25:06.744 "job": "nvme0n1", 00:25:06.744 "core_mask": "0x2", 00:25:06.744 "workload": "randwrite", 00:25:06.744 "status": "finished", 00:25:06.744 "queue_depth": 16, 00:25:06.744 "io_size": 131072, 00:25:06.744 "runtime": 2.00421, 00:25:06.744 "iops": 5135.190424157149, 00:25:06.744 "mibps": 641.8988030196437, 00:25:06.744 "io_failed": 0, 00:25:06.744 "io_timeout": 0, 00:25:06.744 "avg_latency_us": 3108.3557446207114, 00:25:06.744 "min_latency_us": 1630.9527272727273, 00:25:06.744 "max_latency_us": 7328.1163636363635 00:25:06.744 } 00:25:06.744 ], 00:25:06.744 "core_count": 1 00:25:06.744 } 00:25:06.744 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:06.744 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:06.744 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:06.744 | .driver_specific 00:25:06.744 | .nvme_error 00:25:06.744 | .status_code 00:25:06.744 | .command_transient_transport_error' 00:25:06.744 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 331 > 0 )) 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86557 00:25:07.004 01:38:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86557 ']' 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86557 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86557 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:07.004 killing process with pid 86557 00:25:07.004 Received shutdown signal, test time was about 2.000000 seconds 00:25:07.004 00:25:07.004 Latency(us) 00:25:07.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.004 =================================================================================================================== 00:25:07.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86557' 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86557 00:25:07.004 01:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86557 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86324 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86324 ']' 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86324 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86324 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:07.941 killing process with pid 86324 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86324' 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86324 00:25:07.941 01:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86324 00:25:08.879 00:25:08.879 real 0m22.159s 00:25:08.879 user 0m42.492s 00:25:08.879 sys 0m4.654s 00:25:08.879 ************************************ 00:25:08.879 END TEST nvmf_digest_error 00:25:08.879 ************************************ 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:08.879 01:38:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.879 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.879 rmmod nvme_tcp 00:25:08.879 rmmod nvme_fabrics 00:25:08.879 rmmod nvme_keyring 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 86324 ']' 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 86324 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 86324 ']' 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 86324 00:25:09.139 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86324) - No such process 00:25:09.139 Process with pid 86324 is not found 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 86324 is not found' 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:09.139 01:38:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:09.139 01:38:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:09.139 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:09.139 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:09.139 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:09.139 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.139 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.139 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:25:09.400 00:25:09.400 real 0m46.156s 00:25:09.400 user 1m26.736s 00:25:09.400 sys 0m9.563s 00:25:09.400 ************************************ 00:25:09.400 END TEST nvmf_digest 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:09.400 ************************************ 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.400 ************************************ 00:25:09.400 START TEST nvmf_host_multipath 00:25:09.400 ************************************ 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:09.400 * Looking for test storage... 
00:25:09.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.400 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:09.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.400 --rc genhtml_branch_coverage=1 00:25:09.400 --rc genhtml_function_coverage=1 00:25:09.400 --rc genhtml_legend=1 00:25:09.400 --rc geninfo_all_blocks=1 00:25:09.400 --rc geninfo_unexecuted_blocks=1 00:25:09.400 00:25:09.401 ' 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:09.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.401 --rc genhtml_branch_coverage=1 00:25:09.401 --rc genhtml_function_coverage=1 00:25:09.401 --rc genhtml_legend=1 00:25:09.401 --rc geninfo_all_blocks=1 00:25:09.401 --rc geninfo_unexecuted_blocks=1 00:25:09.401 00:25:09.401 ' 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:09.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.401 --rc genhtml_branch_coverage=1 00:25:09.401 --rc genhtml_function_coverage=1 00:25:09.401 --rc genhtml_legend=1 00:25:09.401 --rc geninfo_all_blocks=1 00:25:09.401 --rc geninfo_unexecuted_blocks=1 00:25:09.401 00:25:09.401 ' 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:09.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.401 --rc genhtml_branch_coverage=1 00:25:09.401 --rc genhtml_function_coverage=1 00:25:09.401 --rc genhtml_legend=1 00:25:09.401 --rc geninfo_all_blocks=1 00:25:09.401 --rc geninfo_unexecuted_blocks=1 00:25:09.401 00:25:09.401 ' 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.401 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.401 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:09.661 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:09.661 Cannot find device "nvmf_init_br" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:09.662 Cannot find device "nvmf_init_br2" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:09.662 Cannot find device "nvmf_tgt_br" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:09.662 Cannot find device "nvmf_tgt_br2" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:09.662 Cannot find device "nvmf_init_br" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:09.662 Cannot find device "nvmf_init_br2" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:09.662 Cannot find device "nvmf_tgt_br" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:09.662 Cannot find device "nvmf_tgt_br2" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:09.662 Cannot find device "nvmf_br" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:09.662 Cannot find device "nvmf_init_if" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:09.662 Cannot find device "nvmf_init_if2" 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:25:09.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:09.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:09.662 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:09.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:09.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:25:09.922 00:25:09.922 --- 10.0.0.3 ping statistics --- 00:25:09.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.922 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:09.922 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:09.922 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.028 ms 00:25:09.922 00:25:09.922 --- 10.0.0.4 ping statistics --- 00:25:09.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.922 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:09.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:09.922 00:25:09.922 --- 10.0.0.1 ping statistics --- 00:25:09.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.922 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:09.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:09.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:25:09.922 00:25:09.922 --- 10.0.0.2 ping statistics --- 00:25:09.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.922 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=86900 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 86900 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 86900 ']' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:09.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:09.922 01:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:09.922 [2024-09-28 01:38:05.829918] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:25:09.922 [2024-09-28 01:38:05.830078] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.181 [2024-09-28 01:38:06.005859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:10.440 [2024-09-28 01:38:06.238324] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.440 [2024-09-28 01:38:06.238400] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.440 [2024-09-28 01:38:06.238434] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.440 [2024-09-28 01:38:06.238468] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.440 [2024-09-28 01:38:06.238487] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.440 [2024-09-28 01:38:06.238685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.440 [2024-09-28 01:38:06.238811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.699 [2024-09-28 01:38:06.423624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:10.958 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:10.958 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:25:10.958 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:10.958 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:10.958 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:10.958 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.958 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86900 00:25:10.958 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:11.217 [2024-09-28 01:38:06.970397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.217 01:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:11.477 Malloc0 00:25:11.477 01:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:11.736 01:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:11.995 01:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:12.254 [2024-09-28 01:38:08.059814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:12.254 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:12.514 [2024-09-28 01:38:08.279874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=86956 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 86956 /var/tmp/bdevperf.sock 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 86956 ']' 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.514 01:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:13.450 01:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.450 01:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:25:13.450 01:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:13.709 01:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:13.967 Nvme0n1 00:25:13.967 01:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:14.309 Nvme0n1 00:25:14.309 01:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:14.309 01:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:15.274 01:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:15.274 01:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:15.533 01:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:15.790 01:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:15.790 01:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86900 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:15.790 01:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87000 00:25:15.790 01:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:22.359 Attaching 4 probes... 00:25:22.359 @path[10.0.0.3, 4421]: 16092 00:25:22.359 @path[10.0.0.3, 4421]: 16540 00:25:22.359 @path[10.0.0.3, 4421]: 16524 00:25:22.359 @path[10.0.0.3, 4421]: 16299 00:25:22.359 @path[10.0.0.3, 4421]: 16306 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87000 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:22.359 01:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:22.359 01:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:22.618 01:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:22.618 01:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87115 00:25:22.618 01:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86900 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:22.618 01:38:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:29.187 Attaching 4 probes... 00:25:29.187 @path[10.0.0.3, 4420]: 15871 00:25:29.187 @path[10.0.0.3, 4420]: 15912 00:25:29.187 @path[10.0.0.3, 4420]: 16174 00:25:29.187 @path[10.0.0.3, 4420]: 16152 00:25:29.187 @path[10.0.0.3, 4420]: 16315 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87115 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:29.187 01:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:29.446 01:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:29.705 01:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:29.705 01:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87226 00:25:29.705 01:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:29.705 01:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86900 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:36.274 Attaching 4 probes... 00:25:36.274 @path[10.0.0.3, 4421]: 11490 00:25:36.274 @path[10.0.0.3, 4421]: 16083 00:25:36.274 @path[10.0.0.3, 4421]: 16041 00:25:36.274 @path[10.0.0.3, 4421]: 16000 00:25:36.274 @path[10.0.0.3, 4421]: 15960 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87226 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:36.274 01:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:36.533 01:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:36.533 01:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87340 00:25:36.533 01:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:36.533 01:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86900 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:43.102 Attaching 4 probes... 
00:25:43.102 00:25:43.102 00:25:43.102 00:25:43.102 00:25:43.102 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87340 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:43.102 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:43.103 01:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:43.362 01:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:43.362 01:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86900 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:43.362 01:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87453 00:25:43.362 01:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:49.932 Attaching 4 probes... 
00:25:49.932 @path[10.0.0.3, 4421]: 15520 00:25:49.932 @path[10.0.0.3, 4421]: 15868 00:25:49.932 @path[10.0.0.3, 4421]: 15769 00:25:49.932 @path[10.0.0.3, 4421]: 15789 00:25:49.932 @path[10.0.0.3, 4421]: 15776 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87453 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:49.932 01:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:25:50.869 01:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:50.869 01:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87571 00:25:50.869 01:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86900 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:50.869 01:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:57.438 Attaching 4 probes... 
00:25:57.438 @path[10.0.0.3, 4420]: 15172 00:25:57.438 @path[10.0.0.3, 4420]: 15611 00:25:57.438 @path[10.0.0.3, 4420]: 15392 00:25:57.438 @path[10.0.0.3, 4420]: 15435 00:25:57.438 @path[10.0.0.3, 4420]: 15407 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87571 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:57.438 01:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:57.438 [2024-09-28 01:38:53.169056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:57.438 01:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:57.696 01:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:04.261 01:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:04.261 01:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87746 00:26:04.261 01:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86900 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:04.261 01:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:09.578 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:09.578 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:09.838 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:09.838 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:10.096 Attaching 4 probes... 
00:26:10.096 @path[10.0.0.3, 4421]: 14898 00:26:10.096 @path[10.0.0.3, 4421]: 15219 00:26:10.096 @path[10.0.0.3, 4421]: 14565 00:26:10.096 @path[10.0.0.3, 4421]: 15179 00:26:10.096 @path[10.0.0.3, 4421]: 15564 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87746 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 86956 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 86956 ']' 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 86956 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86956 00:26:10.096 killing process with pid 86956 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:10.096 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:10.097 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86956' 00:26:10.097 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 86956 00:26:10.097 01:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 86956 00:26:10.097 { 00:26:10.097 "results": [ 00:26:10.097 { 00:26:10.097 "job": "Nvme0n1", 00:26:10.097 "core_mask": "0x4", 00:26:10.097 "workload": "verify", 00:26:10.097 "status": "terminated", 00:26:10.097 "verify_range": { 00:26:10.097 "start": 0, 00:26:10.097 "length": 16384 00:26:10.097 }, 00:26:10.097 "queue_depth": 128, 00:26:10.097 "io_size": 4096, 00:26:10.097 "runtime": 55.527872, 00:26:10.097 "iops": 6742.271700957674, 00:26:10.097 "mibps": 26.336998831865913, 00:26:10.097 "io_failed": 0, 00:26:10.097 "io_timeout": 0, 00:26:10.097 "avg_latency_us": 18960.723765506682, 00:26:10.097 "min_latency_us": 726.1090909090909, 00:26:10.097 "max_latency_us": 7046430.72 00:26:10.097 } 00:26:10.097 ], 00:26:10.097 "core_count": 1 00:26:10.097 } 00:26:11.041 01:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 86956 00:26:11.041 01:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:11.041 [2024-09-28 01:38:08.373786] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 
24.03.0 initialization... 00:26:11.041 [2024-09-28 01:38:08.373917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86956 ] 00:26:11.041 [2024-09-28 01:38:08.531402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.041 [2024-09-28 01:38:08.686464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.041 [2024-09-28 01:38:08.837264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:11.041 [2024-09-28 01:38:10.149864] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:26:11.041 Running I/O for 90 seconds... 00:26:11.041 8437.00 IOPS, 32.96 MiB/s 8398.00 IOPS, 32.80 MiB/s 8361.33 IOPS, 32.66 MiB/s 8335.00 IOPS, 32.56 MiB/s 8308.00 IOPS, 32.45 MiB/s 8292.67 IOPS, 32.39 MiB/s 8273.71 IOPS, 32.32 MiB/s 8237.50 IOPS, 32.18 MiB/s [2024-09-28 01:38:18.528193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.041 [2024-09-28 01:38:18.528278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:11.041 [2024-09-28 01:38:18.528356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.041 [2024-09-28 01:38:18.528384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:11.041 [2024-09-28 01:38:18.528414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.041 [2024-09-28 01:38:18.528435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:11.041 [2024-09-28 01:38:18.528494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.041 [2024-09-28 01:38:18.528518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:11.041 [2024-09-28 01:38:18.528546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.041 [2024-09-28 01:38:18.528566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:11.041 [2024-09-28 01:38:18.528593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.041 [2024-09-28 01:38:18.528615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:11.041 [2024-09-28 01:38:18.528642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.041 [2024-09-28 01:38:18.528662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.528689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.528709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.528737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.528757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.528811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.528833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.528876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.528897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.528923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.528943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.528969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.528989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:11.042 [2024-09-28 01:38:18.529174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-09-28 01:38:18.529544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.529976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.529995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.530022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.530051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.530081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.530102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.530128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.530148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:11.042 [2024-09-28 01:38:18.530174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.042 [2024-09-28 01:38:18.530194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.530239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.530285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.530331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:26:11.043 [2024-09-28 01:38:18.530691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.530941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.530993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.531034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.531084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.531132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.043 [2024-09-28 01:38:18.531181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:11.043 [2024-09-28 01:38:18.531802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.043 [2024-09-28 01:38:18.531822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.531848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.531868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.531894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.531914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.531948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.531969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.531996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.532016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.044 [2024-09-28 01:38:18.532201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.532803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.532850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.532896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.532943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.532969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.532989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.533035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.533082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.533136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.044 [2024-09-28 01:38:18.533185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.533232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.533279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.533330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.533381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.533428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.533506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:11.044 [2024-09-28 01:38:18.533535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.044 [2024-09-28 01:38:18.533556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.045 [2024-09-28 01:38:18.535286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:26:11.045 [2024-09-28 01:38:18.535442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.535957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.535979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:18.536622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:18.536646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:11.045 8175.89 IOPS, 31.94 MiB/s 8158.30 IOPS, 31.87 MiB/s 8151.91 IOPS, 31.84 MiB/s 8149.25 IOPS, 31.83 MiB/s 8140.23 IOPS, 31.80 MiB/s 8137.07 IOPS, 31.79 MiB/s [2024-09-28 01:38:25.117484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:25.117585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:25.117713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:25.117742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:25.117774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:25.117819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:25.117850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:25.117872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:25.117900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:25.117951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:25.117978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:25.117998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 
01:38:25.118025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.045 [2024-09-28 01:38:25.118062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:11.045 [2024-09-28 01:38:25.118090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.046 [2024-09-28 01:38:25.118159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.046 [2024-09-28 01:38:25.118209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.046 [2024-09-28 01:38:25.118271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.046 [2024-09-28 01:38:25.118334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.046 [2024-09-28 01:38:25.118378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.046 [2024-09-28 01:38:25.118424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.046 [2024-09-28 01:38:25.118469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.046 [2024-09-28 01:38:25.118528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 
cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.118919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.118940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:11.046 [2024-09-28 01:38:25.119733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.046 [2024-09-28 01:38:25.119754] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.119782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.119812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.119841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.119877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.119904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.119925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.119970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.120052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.120136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.120208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.120259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.120310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.120378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.120443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.120507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.120945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 
nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.120965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.121030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.121079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.121144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.121195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.121245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.121298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.121362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.047 [2024-09-28 01:38:25.121426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.121514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.121590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.121645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.121712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:11.047 [2024-09-28 01:38:25.121741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.047 [2024-09-28 01:38:25.121762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.121791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.121813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.121842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.121863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.121891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.121913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.121942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.121979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.122045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.122115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
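Note: each NOTICE pair above is the SPDK NVMe driver echoing a failed I/O (nvme_io_qpair_print_command) followed by its completion status (spdk_nvme_print_completion). The "(03/02)" status is Status Code Type 0x3 (Path Related) with Status Code 0x2, Asymmetric Access Inaccessible: the ANA group serving this path reports the namespace as inaccessible while the test toggles ANA states, and dnr:0 means the command may be retried on another path. A minimal sketch of detecting that status in a completion callback follows; spdk_nvme_cpl and SPDK_NVME_SCT_PATH are SPDK public definitions, while the SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE name is recalled from spdk/nvme_spec.h rather than confirmed by this log, so treat it as an assumption.

/*
 * Hedged sketch: classify an NVMe completion as the ANA "inaccessible" path
 * error shown above (sct 0x3, sc 0x2). Type and constant names are assumed
 * from spdk/nvme_spec.h; the raw values 0x3/0x2 come from the NVMe spec.
 */
#include <stdbool.h>
#include "spdk/nvme_spec.h"

static bool
io_failed_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
{
	/* sct/sc are the bitfields printed as "(03/02)" in the log lines above */
	return cpl->status.sct == SPDK_NVME_SCT_PATH &&
	       cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE;
}

A multipath initiator would typically treat such completions as transient and requeue the I/O on a path whose ANA state is optimized or non-optimized, which is consistent with the workload in this log continuing at reduced IOPS rather than aborting.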
00:26:11.048 [2024-09-28 01:38:25.122142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.122163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.122238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.122287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.122352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.122401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.122948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.122997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.123069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.123121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.123177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.123230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.123297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.048 [2024-09-28 01:38:25.123392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.123439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.123485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.048 [2024-09-28 01:38:25.123532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:11.048 [2024-09-28 01:38:25.123573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.123620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.123676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.123724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.123771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.123818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.123864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.123927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.123975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.123995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.124022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.124043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.124069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.124089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.124116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.124136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.124976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.049 [2024-09-28 01:38:25.125011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125778] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:25.125952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:25.125972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:11.049 8030.87 IOPS, 31.37 MiB/s 7618.00 IOPS, 29.76 MiB/s 7640.47 IOPS, 29.85 MiB/s 7659.11 IOPS, 29.92 MiB/s 7676.63 IOPS, 29.99 MiB/s 7694.40 IOPS, 30.06 MiB/s 7712.00 IOPS, 30.12 MiB/s [2024-09-28 01:38:32.257848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:32.257934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:32.258012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:32.258039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:11.049 [2024-09-28 01:38:32.258070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.049 [2024-09-28 01:38:32.258091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.258736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.258783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.258830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.258877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.258923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.258949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.050 [2024-09-28 01:38:32.259617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.259675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.259737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:11.050 [2024-09-28 01:38:32.259764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.050 [2024-09-28 01:38:32.259785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
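Note: the bare "NNNN.NN IOPS, NN.NN MiB/s" readings interleaved with the notices (8175.89 down to 7712.00 above) are periodic throughput samples from the I/O workload. Each I/O is 8 logical blocks of 512 B (len:8, 0x1000 bytes), so the MiB/s column is simply IOPS times 4 KiB, e.g. 7618.00 IOPS * 4096 B is about 29.76 MiB/s; the dip from roughly 8140 to 7618 IOPS and the slow climb back toward 7712 is consistent with I/O stalling briefly each time a path reports ASYMMETRIC ACCESS INACCESSIBLE. A small self-contained check of that conversion (sample values copied from this log, helper name hypothetical):

/*
 * Hedged sketch: reproduce the log's MiB/s figures from its IOPS samples,
 * assuming the 4096-byte I/O size implied by "len:8" (8 x 512 B blocks).
 */
#include <stdio.h>

static double
mib_per_sec(double iops, unsigned int io_bytes)
{
	return iops * io_bytes / (1024.0 * 1024.0);
}

int
main(void)
{
	/* IOPS samples taken verbatim from the surrounding log lines */
	const double samples[] = { 8175.89, 8137.07, 8030.87, 7618.00, 7712.00 };

	for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		/* prints roughly 31.94, 31.79, 31.37, 29.76, 30.12 MiB/s */
		printf("%.2f IOPS -> %.2f MiB/s\n",
		       samples[i], mib_per_sec(samples[i], 4096));
	}
	return 0;
}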
00:26:11.051 [2024-09-28 01:38:32.259813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.259834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.259861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.259881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.259909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.259929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.259957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.259977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.260025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.260961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.260982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.261028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.261075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.261124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.261171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.051 [2024-09-28 01:38:32.261218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:11.051 [2024-09-28 01:38:32.261264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.261311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.261358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.261406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.051 [2024-09-28 01:38:32.261466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:11.051 [2024-09-28 01:38:32.261496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.261966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.261986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.052 [2024-09-28 01:38:32.262079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.052 [2024-09-28 01:38:32.262126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.052 [2024-09-28 01:38:32.262183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.052 [2024-09-28 01:38:32.262230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.052 [2024-09-28 01:38:32.262277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.052 [2024-09-28 01:38:32.262323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.052 [2024-09-28 01:38:32.262369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.052 [2024-09-28 01:38:32.262416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 
dnr:0 00:26:11.052 [2024-09-28 01:38:32.262803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.052 [2024-09-28 01:38:32.262873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:11.052 [2024-09-28 01:38:32.262899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.262919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.262946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.262993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.263657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.263677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.053 [2024-09-28 01:38:32.264536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.264603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.264659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.264713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.264769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.264822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.264875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.264943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.264998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.265024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.265059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.265080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.265113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.265133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.265167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.265187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.265222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:11.053 [2024-09-28 01:38:32.265244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.265277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.265298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.265330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.265351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.265384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.265404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:11.053 [2024-09-28 01:38:32.265437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.053 [2024-09-28 01:38:32.265471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:11.054 7699.64 IOPS, 30.08 MiB/s 7364.87 IOPS, 28.77 MiB/s 7058.00 IOPS, 27.57 MiB/s 6775.68 IOPS, 26.47 MiB/s 6515.08 IOPS, 25.45 MiB/s 6273.78 IOPS, 24.51 MiB/s 6049.71 IOPS, 23.63 MiB/s 5848.76 IOPS, 22.85 MiB/s 5913.80 IOPS, 23.10 MiB/s 5975.94 IOPS, 23.34 MiB/s 6036.44 IOPS, 23.58 MiB/s 6092.79 IOPS, 23.80 MiB/s 6147.00 IOPS, 24.01 MiB/s 6196.29 IOPS, 24.20 MiB/s [2024-09-28 01:38:45.627905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.627972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51104 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.628787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.628822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.628859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.628895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.628930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.628966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.628984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.054 [2024-09-28 01:38:45.629106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.054 [2024-09-28 01:38:45.629384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.054 [2024-09-28 01:38:45.629419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.054 [2024-09-28 01:38:45.629439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629903] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.629975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.629994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.630011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.055 [2024-09-28 01:38:45.630620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.630664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:11.055 [2024-09-28 01:38:45.630685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.630704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.630740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.055 [2024-09-28 01:38:45.630777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.055 [2024-09-28 01:38:45.630796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.630813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.630832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.630849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.630868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.630885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.630904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.630922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.630940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.630967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631101] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:30 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.056 [2024-09-28 01:38:45.631970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.631989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.632007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.632027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.632045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.632072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.632090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.632262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.632294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.632319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.632340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.632548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.632587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.632646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.056 [2024-09-28 01:38:45.632668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.056 [2024-09-28 01:38:45.632689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.632708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.632729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.632748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.632784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.632834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.632854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.632873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.632892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.632910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.632930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.632947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.632982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.632999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.633036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.057 [2024-09-28 01:38:45.633072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.057 [2024-09-28 01:38:45.633109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.057 [2024-09-28 01:38:45.633145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.057 [2024-09-28 01:38:45.633192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.057 
[2024-09-28 01:38:45.633232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.057 [2024-09-28 01:38:45.633268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.057 [2024-09-28 01:38:45.633304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.057 [2024-09-28 01:38:45.633347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:11.057 [2024-09-28 01:38:45.633390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.057 [2024-09-28 01:38:45.633405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.057 [2024-09-28 01:38:45.633421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50984 len:8 PRP1 0x0 PRP2 0x0 00:26:11.057 [2024-09-28 01:38:45.633438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.057 [2024-09-28 01:38:45.633470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.057 [2024-09-28 01:38:45.633483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51440 len:8 PRP1 0x0 PRP2 0x0 00:26:11.057 [2024-09-28 01:38:45.633499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.057 [2024-09-28 01:38:45.633549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.057 [2024-09-28 01:38:45.633562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51448 len:8 PRP1 0x0 PRP2 0x0 00:26:11.057 [2024-09-28 01:38:45.633578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.057 [2024-09-28 01:38:45.633607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.057 [2024-09-28 01:38:45.633620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51456 len:8 PRP1 0x0 PRP2 0x0 
00:26:11.057 [2024-09-28 01:38:45.633636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.057 [2024-09-28 01:38:45.633665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.057 [2024-09-28 01:38:45.633678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51464 len:8 PRP1 0x0 PRP2 0x0 00:26:11.057 [2024-09-28 01:38:45.633703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.057 [2024-09-28 01:38:45.633734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.057 [2024-09-28 01:38:45.633747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51472 len:8 PRP1 0x0 PRP2 0x0 00:26:11.057 [2024-09-28 01:38:45.633763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.057 [2024-09-28 01:38:45.633791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.057 [2024-09-28 01:38:45.633804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51480 len:8 PRP1 0x0 PRP2 0x0 00:26:11.057 [2024-09-28 01:38:45.633820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.057 [2024-09-28 01:38:45.633836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.057 [2024-09-28 01:38:45.633851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.057 [2024-09-28 01:38:45.633865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51488 len:8 PRP1 0x0 PRP2 0x0 00:26:11.058 [2024-09-28 01:38:45.633881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.058 [2024-09-28 01:38:45.633897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.058 [2024-09-28 01:38:45.633910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.058 [2024-09-28 01:38:45.633923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51496 len:8 PRP1 0x0 PRP2 0x0 00:26:11.058 [2024-09-28 01:38:45.633939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.058 [2024-09-28 01:38:45.634193] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002bc80 was disconnected and freed. reset controller. 
00:26:11.058 [2024-09-28 01:38:45.634349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.058 [2024-09-28 01:38:45.634380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.058 [2024-09-28 01:38:45.634402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.058 [2024-09-28 01:38:45.634423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.058 [2024-09-28 01:38:45.634455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.058 [2024-09-28 01:38:45.634478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.058 [2024-09-28 01:38:45.634495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.058 [2024-09-28 01:38:45.634512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.058 [2024-09-28 01:38:45.634531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.058 [2024-09-28 01:38:45.634548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.058 [2024-09-28 01:38:45.634586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:11.058 [2024-09-28 01:38:45.635914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.058 [2024-09-28 01:38:45.636013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:26:11.058 [2024-09-28 01:38:45.636522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.058 [2024-09-28 01:38:45.636566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.3, port=4421 00:26:11.058 [2024-09-28 01:38:45.636598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:11.058 [2024-09-28 01:38:45.636682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:26:11.058 [2024-09-28 01:38:45.636730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.058 [2024-09-28 01:38:45.636754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.058 [2024-09-28 01:38:45.636773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.058 [2024-09-28 01:38:45.636829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
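Editor's note: the wall of notices above is bdev_nvme failing over the active path. Every command still queued on the old TCP qpair is completed with ABORTED - SQ DELETION while that qpair is torn down, the admin queue is flushed the same way, and the driver then reconnects to 10.0.0.3 port 4421. The first reconnect attempt fails with errno 111 (connection refused), the reset is marked failed and retried, and the "Resetting controller successful" notice about ten seconds later in the log marks the retry completing. As a reading aid, one of those notices decoded field by field (an annotation only, not a command to run; block size 512 B per the malloc bdev used by this test):

#   WRITE sqid:1 cid:45 nsid:1 lba:51256 len:8   -> 8-block write (4 KiB at 512 B blocks) on I/O queue 1, command id 45
#   ABORTED - SQ DELETION (00/08)                -> status code type 0x0 (generic), status code 0x08: command aborted due to SQ deletion
#   qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0   -> completion dword 0, SQ head pointer, phase tag, more bit, do-not-retry bit (0 = retryable)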
00:26:11.058 [2024-09-28 01:38:45.636855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.058 6234.92 IOPS, 24.36 MiB/s 6266.19 IOPS, 24.48 MiB/s 6303.39 IOPS, 24.62 MiB/s 6341.77 IOPS, 24.77 MiB/s 6375.82 IOPS, 24.91 MiB/s 6408.51 IOPS, 25.03 MiB/s 6439.83 IOPS, 25.16 MiB/s 6466.44 IOPS, 25.26 MiB/s 6493.84 IOPS, 25.37 MiB/s 6520.73 IOPS, 25.47 MiB/s [2024-09-28 01:38:55.700624] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:11.058 6547.83 IOPS, 25.58 MiB/s 6576.68 IOPS, 25.69 MiB/s 6605.17 IOPS, 25.80 MiB/s 6631.84 IOPS, 25.91 MiB/s 6648.16 IOPS, 25.97 MiB/s 6665.73 IOPS, 26.04 MiB/s 6682.00 IOPS, 26.10 MiB/s 6696.30 IOPS, 26.16 MiB/s 6714.52 IOPS, 26.23 MiB/s 6733.93 IOPS, 26.30 MiB/s Received shutdown signal, test time was about 55.528677 seconds 00:26:11.058 00:26:11.058 Latency(us) 00:26:11.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.058 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:11.058 Verification LBA range: start 0x0 length 0x4000 00:26:11.058 Nvme0n1 : 55.53 6742.27 26.34 0.00 0.00 18960.72 726.11 7046430.72 00:26:11.058 =================================================================================================================== 00:26:11.058 Total : 6742.27 26.34 0.00 0.00 18960.72 726.11 7046430.72 00:26:11.058 [2024-09-28 01:39:05.832668] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:26:11.058 01:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:11.317 rmmod nvme_tcp 00:26:11.317 rmmod nvme_fabrics 00:26:11.317 rmmod nvme_keyring 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 86900 ']' 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 86900 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 86900 ']' 
00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 86900 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86900 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:11.317 killing process with pid 86900 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86900' 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 86900 00:26:11.317 01:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 86900 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:26:12.699 00:26:12.699 real 1m3.330s 00:26:12.699 user 2m54.128s 00:26:12.699 sys 0m17.842s 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:12.699 ************************************ 00:26:12.699 END TEST nvmf_host_multipath 00:26:12.699 ************************************ 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.699 ************************************ 00:26:12.699 START TEST nvmf_timeout 00:26:12.699 ************************************ 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:12.699 * Looking for test storage... 
00:26:12.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:26:12.699 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.960 --rc genhtml_branch_coverage=1 00:26:12.960 --rc genhtml_function_coverage=1 00:26:12.960 --rc genhtml_legend=1 00:26:12.960 --rc geninfo_all_blocks=1 00:26:12.960 --rc geninfo_unexecuted_blocks=1 00:26:12.960 00:26:12.960 ' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.960 --rc genhtml_branch_coverage=1 00:26:12.960 --rc genhtml_function_coverage=1 00:26:12.960 --rc genhtml_legend=1 00:26:12.960 --rc geninfo_all_blocks=1 00:26:12.960 --rc geninfo_unexecuted_blocks=1 00:26:12.960 00:26:12.960 ' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.960 --rc genhtml_branch_coverage=1 00:26:12.960 --rc genhtml_function_coverage=1 00:26:12.960 --rc genhtml_legend=1 00:26:12.960 --rc geninfo_all_blocks=1 00:26:12.960 --rc geninfo_unexecuted_blocks=1 00:26:12.960 00:26:12.960 ' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.960 --rc genhtml_branch_coverage=1 00:26:12.960 --rc genhtml_function_coverage=1 00:26:12.960 --rc genhtml_legend=1 00:26:12.960 --rc geninfo_all_blocks=1 00:26:12.960 --rc geninfo_unexecuted_blocks=1 00:26:12.960 00:26:12.960 ' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.960 
01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.960 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:12.961 01:39:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:12.961 Cannot find device "nvmf_init_br" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:12.961 Cannot find device "nvmf_init_br2" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:26:12.961 Cannot find device "nvmf_tgt_br" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:12.961 Cannot find device "nvmf_tgt_br2" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:12.961 Cannot find device "nvmf_init_br" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:12.961 Cannot find device "nvmf_init_br2" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:12.961 Cannot find device "nvmf_tgt_br" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:12.961 Cannot find device "nvmf_tgt_br2" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:12.961 Cannot find device "nvmf_br" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:12.961 Cannot find device "nvmf_init_if" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:12.961 Cannot find device "nvmf_init_if2" 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:12.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:12.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:26:12.961 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:13.220 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:13.220 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:13.220 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:13.220 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:13.220 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:26:13.220 01:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:13.220 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:13.221 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
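Editor's note: at this point nvmf_veth_init has finished building the virtual topology that NET_TYPE=virt uses: veth pairs for two initiator interfaces and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined over the nvmf_br bridge, and iptables rules admitting NVMe/TCP traffic on port 4420. The pings just below verify connectivity in both directions. Condensed from the commands traced above, the setup for the first initiator/target pair looks roughly like this (a sketch of one pair only; the trace repeats the same steps for the *_if2 pair and adds SPDK_NVMF comments to the iptables rules):

# Target side lives in its own network namespace; initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1/24 in the root namespace, target 10.0.0.3/24 inside the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace peers together and allow NVMe/TCP traffic through.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT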
00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:13.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:13.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:26:13.480 00:26:13.480 --- 10.0.0.3 ping statistics --- 00:26:13.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.480 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:13.480 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:13.480 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:26:13.480 00:26:13.480 --- 10.0.0.4 ping statistics --- 00:26:13.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.480 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:13.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:13.480 00:26:13.480 --- 10.0.0.1 ping statistics --- 00:26:13.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.480 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:13.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:26:13.480 00:26:13.480 --- 10.0.0.2 ping statistics --- 00:26:13.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.480 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=88124 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 88124 00:26:13.480 01:39:09 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88124 ']' 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.480 01:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 [2024-09-28 01:39:09.325497] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:26:13.480 [2024-09-28 01:39:09.325656] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.740 [2024-09-28 01:39:09.497574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.740 [2024-09-28 01:39:09.655532] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.740 [2024-09-28 01:39:09.655606] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.740 [2024-09-28 01:39:09.655631] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.740 [2024-09-28 01:39:09.655642] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.740 [2024-09-28 01:39:09.655654] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:13.740 [2024-09-28 01:39:09.655894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.740 [2024-09-28 01:39:09.656183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.999 [2024-09-28 01:39:09.805239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:14.568 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:14.568 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:14.568 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:14.568 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:14.568 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.568 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.568 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.568 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:14.827 [2024-09-28 01:39:10.558075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.827 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:15.086 Malloc0 00:26:15.086 01:39:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:15.345 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.603 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:15.861 [2024-09-28 01:39:11.612949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88173 00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88173 /var/tmp/bdevperf.sock 00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88173 ']' 00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:15.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
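Pulled out of the trace above, the target-side provisioning is this short sequence of rpc.py calls (arguments exactly as logged for this run; anything not shown is left at its default):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the options used by this test.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB RAM-backed bdev with 512-byte blocks, exported through subsystem cnode1.
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420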
00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:15.861 01:39:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:15.861 [2024-09-28 01:39:11.719511] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:26:15.861 [2024-09-28 01:39:11.719688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88173 ] 00:26:16.119 [2024-09-28 01:39:11.877855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.119 [2024-09-28 01:39:12.033790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.378 [2024-09-28 01:39:12.184877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:16.946 01:39:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:16.946 01:39:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:16.946 01:39:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:17.204 01:39:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:17.463 NVMe0n1 00:26:17.463 01:39:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:17.463 01:39:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88197 00:26:17.463 01:39:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:17.463 Running I/O for 10 seconds... 
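The initiator side, condensed from the same trace: bdevperf is started in wait-for-RPC mode (-z), the remote controller is attached with the two timeout knobs this test exercises, and the verify job is kicked off. A simplified sketch; the real host/timeout.sh inserts waitforlisten and traps between the launch and the RPCs.

    BDEVPERF_SOCK=/var/tmp/bdevperf.sock
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $BDEVPERF_SOCK"
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r $BDEVPERF_SOCK -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    # Unlimited bdev-level retries; recovery is governed by the controller-level timeouts below.
    $RPC bdev_nvme_set_options -r -1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Start the verify workload; the listener is removed shortly after to force the timeout path.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BDEVPERF_SOCK perform_tests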
00:26:18.400 01:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:18.662 6549.00 IOPS, 25.58 MiB/s [2024-09-28 01:39:14.497606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.497979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.497991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.498007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.498019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.498037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.498050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.498072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.498085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.498100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.498112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.498127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.498139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.498155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.498167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.498183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.662 [2024-09-28 01:39:14.498195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.662 [2024-09-28 01:39:14.498212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498575] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.498619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.498974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.498992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.499808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.499852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:18.663 [2024-09-28 01:39:14.499923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.499980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.499993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.500008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.500020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.500038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.500068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.500084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.663 [2024-09-28 01:39:14.500098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.500114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.500126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.663 [2024-09-28 01:39:14.500144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.663 [2024-09-28 01:39:14.500157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 
01:39:14.500232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.500978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.500997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.664 [2024-09-28 01:39:14.501372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.664 [2024-09-28 01:39:14.501385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 
01:39:14.501800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.501986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.501999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.665 [2024-09-28 01:39:14.502261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:18.665 [2024-09-28 01:39:14.502294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:18.665 [2024-09-28 01:39:14.502312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:18.665 [2024-09-28 01:39:14.502324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59280 len:8 PRP1 0x0 PRP2 0x0 00:26:18.665 [2024-09-28 01:39:14.502338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.665 [2024-09-28 01:39:14.502575] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b280 was disconnected and freed. reset controller. 
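The flood of ABORTED - SQ DELETION completions above is the expected reaction to the step taken at the top of this block: the subsystem's only listener is dropped while the verify job is still queuing I/O, so every outstanding command on the qpair is aborted and the host begins resetting the controller. The triggering call, copied from the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420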
00:26:18.665 [2024-09-28 01:39:14.503682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.665 [2024-09-28 01:39:14.504228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:18.665 [2024-09-28 01:39:14.504849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-09-28 01:39:14.505083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:18.665 [2024-09-28 01:39:14.505575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:18.665 [2024-09-28 01:39:14.506034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:18.665 [2024-09-28 01:39:14.506501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.665 [2024-09-28 01:39:14.506963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.665 [2024-09-28 01:39:14.507419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.665 [2024-09-28 01:39:14.507667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.665 [2024-09-28 01:39:14.507885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.665 01:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:20.795 3658.50 IOPS, 14.29 MiB/s 2439.00 IOPS, 9.53 MiB/s [2024-09-28 01:39:16.508425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.795 [2024-09-28 01:39:16.508796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:20.795 [2024-09-28 01:39:16.509259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:20.795 [2024-09-28 01:39:16.509716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:20.795 [2024-09-28 01:39:16.510003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.795 [2024-09-28 01:39:16.510033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.795 [2024-09-28 01:39:16.510050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.795 [2024-09-28 01:39:16.510094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.795 [2024-09-28 01:39:16.510111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.795 01:39:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:20.795 01:39:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:20.795 01:39:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:21.054 01:39:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:21.054 01:39:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:21.054 01:39:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:21.054 01:39:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:21.311 01:39:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:21.311 01:39:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:22.768 1829.25 IOPS, 7.15 MiB/s 1463.40 IOPS, 5.72 MiB/s [2024-09-28 01:39:18.510254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-09-28 01:39:18.510352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:22.768 [2024-09-28 01:39:18.510375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:22.768 [2024-09-28 01:39:18.510410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:22.768 [2024-09-28 01:39:18.510437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-09-28 01:39:18.510454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-09-28 01:39:18.510504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-09-28 01:39:18.510549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.768 [2024-09-28 01:39:18.510567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.638 1219.50 IOPS, 4.76 MiB/s 1045.29 IOPS, 4.08 MiB/s [2024-09-28 01:39:20.510642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.638 [2024-09-28 01:39:20.510694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.638 [2024-09-28 01:39:20.510713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.638 [2024-09-28 01:39:20.510727] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:24.639 [2024-09-28 01:39:20.510772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
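From here the recovery logic is simple arithmetic: with --reconnect-delay-sec 2 a reconnect is attempted roughly every two seconds, and once --ctrlr-loss-timeout-sec 5 has elapsed without a successful connection the controller and its bdev are deleted instead of being reset again. The check below mirrors what host/timeout.sh does next and assumes the bdevperf RPC socket from this run; both queries should come back empty.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    controllers=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
    bdevs=$($RPC bdev_get_bdevs | jq -r '.[].name')
    # NVMe0 and NVMe0n1 should be gone after the ctrlr-loss timeout expired.
    [[ -z "$controllers" && -z "$bdevs" ]] && echo "controller and bdev removed as expected"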
00:26:25.836 914.62 IOPS, 3.57 MiB/s 00:26:25.836 Latency(us) 00:26:25.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.836 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:25.836 Verification LBA range: start 0x0 length 0x4000 00:26:25.836 NVMe0n1 : 8.13 900.27 3.52 15.75 0.00 139473.19 3991.74 7015926.69 00:26:25.836 =================================================================================================================== 00:26:25.836 Total : 900.27 3.52 15.75 0.00 139473.19 3991.74 7015926.69 00:26:25.836 { 00:26:25.836 "results": [ 00:26:25.836 { 00:26:25.836 "job": "NVMe0n1", 00:26:25.836 "core_mask": "0x4", 00:26:25.836 "workload": "verify", 00:26:25.836 "status": "finished", 00:26:25.836 "verify_range": { 00:26:25.836 "start": 0, 00:26:25.836 "length": 16384 00:26:25.836 }, 00:26:25.836 "queue_depth": 128, 00:26:25.836 "io_size": 4096, 00:26:25.836 "runtime": 8.127556, 00:26:25.836 "iops": 900.2706348624359, 00:26:25.836 "mibps": 3.5166821674313904, 00:26:25.836 "io_failed": 128, 00:26:25.836 "io_timeout": 0, 00:26:25.836 "avg_latency_us": 139473.19384089383, 00:26:25.836 "min_latency_us": 3991.7381818181816, 00:26:25.836 "max_latency_us": 7015926.69090909 00:26:25.836 } 00:26:25.836 ], 00:26:25.836 "core_count": 1 00:26:25.836 } 00:26:26.437 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:26.437 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:26.437 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:26.437 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:26.437 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:26.437 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:26.437 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 88197 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88173 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88173 ']' 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88173 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88173 00:26:26.696 killing process with pid 88173 00:26:26.696 Received shutdown signal, test time was about 9.223326 seconds 00:26:26.696 00:26:26.696 Latency(us) 00:26:26.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.696 =================================================================================================================== 00:26:26.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:26.696 01:39:22 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88173' 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88173 00:26:26.696 01:39:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88173 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:28.074 [2024-09-28 01:39:23.840078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:28.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88324 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88324 /var/tmp/bdevperf.sock 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88324 ']' 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.074 01:39:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.074 [2024-09-28 01:39:23.968106] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:26:28.074 [2024-09-28 01:39:23.969036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88324 ] 00:26:28.333 [2024-09-28 01:39:24.139233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.592 [2024-09-28 01:39:24.297754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.592 [2024-09-28 01:39:24.459857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:29.160 01:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.160 01:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:29.160 01:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:29.160 01:39:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:29.728 NVMe0n1 00:26:29.728 01:39:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88349 00:26:29.728 01:39:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:29.728 01:39:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:29.728 Running I/O for 10 seconds... 
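Condensing the setup that was just traced into one place (paths and flags are exactly the ones shown in the log; the real script additionally records PIDs and waits on the RPC socket via waitforlisten before issuing RPCs), this is roughly how the second bdevperf instance is brought up with a 5 s controller-loss timeout, 2 s fast-io-fail timeout and 1 s reconnect delay:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # Start bdevperf on core 2 (-m 0x4); -z holds the workload until perform_tests is sent
    "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
    # (the real script waits here for $sock to appear: waitforlisten $bdevperf_pid)

    # bdev_nvme options as traced (-r -1 taken verbatim from the log), then attach the
    # TCP controller with the timeout knobs under test
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Kick off the 10-second verify workload
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &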
00:26:30.664 01:39:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:30.927 6449.00 IOPS, 25.19 MiB/s [2024-09-28 01:39:26.661365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.661483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.661525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.661543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.661562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.661576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.661594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.661609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.661626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.661640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.661657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.661671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.661691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.661721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.662185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.662207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.662227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.662242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.662261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.662275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.662293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.662541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.662570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.662806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.662872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.662889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.662910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.662924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.662958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.662993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.663164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.663314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.663415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.663434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.663740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.663774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.663799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.663815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.663850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 
01:39:26.663866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.663900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.663929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.664233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.664516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.664550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.664568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.664587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.664602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.664947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.664977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.665001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.665016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.665035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.665050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.665072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.665087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.665454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.665477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.665498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.665514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.665536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.927 [2024-09-28 01:39:26.665551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.665678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.927 [2024-09-28 01:39:26.665813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.665984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.927 [2024-09-28 01:39:26.666107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.666135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.927 [2024-09-28 01:39:26.666151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.666170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.927 [2024-09-28 01:39:26.666185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.666204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.927 [2024-09-28 01:39:26.666218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.666342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.927 [2024-09-28 01:39:26.666363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.666622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.927 [2024-09-28 01:39:26.666657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.927 [2024-09-28 01:39:26.666683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.666698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.666718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.666732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.666753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.666768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.666891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.666912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.667171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.667205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.667336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.667360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.667615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.667650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.667689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.667706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.667728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.928 [2024-09-28 01:39:26.667744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.667995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.928 [2024-09-28 01:39:26.668025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.668049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.668186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.668313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.668340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.668771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.668891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.668920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.668936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.668955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.668970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.668991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.669107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.669362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.669396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.669421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.928 [2024-09-28 01:39:26.669437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.669477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.669493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.669511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.669526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.669654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.669674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.669925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.669955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.928 [2024-09-28 01:39:26.669978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.669993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.670012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.670027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.670047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.670062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.670179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.670202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.670457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.670488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.670512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.670528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.670549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.670564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.670583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.670886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.671043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.671063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.671084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.671100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.671121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.671136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.671155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.671409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.671438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.671538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.671563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.671579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.671955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.671989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.672012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.672027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.672046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.672060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.672079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.928 [2024-09-28 01:39:26.672094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.928 [2024-09-28 01:39:26.672217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.672240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.672500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.672534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.672562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.672578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.672598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.672612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.672631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.672645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.672774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.672792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.673049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.673070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.673090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.673105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.673126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.673141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.673415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.673542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.673573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.673590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.673717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.673864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.674140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:52 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.674278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.674423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.674671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.674708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.674997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.675042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.675061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.675084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.675224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.675357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.675376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.675396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.675541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.675672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.675690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.675710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.675853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.676129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.676262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.676419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.676516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.676546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.676563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.676705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.676822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.676851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.676985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.677111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.677138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.677286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.677663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.677716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.677736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.677994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.678030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.678054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.678068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.678204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 01:39:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:26:30.929 [2024-09-28 01:39:26.678506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.678555] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.678574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.678594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.678704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.678833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.678865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.678890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.678905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.678925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.678939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.678987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.679004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.679023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.679288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.929 [2024-09-28 01:39:26.679320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-09-28 01:39:26.679589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.679627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.679884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.679915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.679932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.680292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.680325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.680350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.680365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.680620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.680649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.680672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.680688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.680936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.680969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.680994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.681010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.681267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-09-28 01:39:26.681367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.681398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:30.930 [2024-09-28 01:39:26.681421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:30.930 [2024-09-28 01:39:26.681654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:30.930 [2024-09-28 01:39:26.681674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59504 len:8 PRP1 0x0 PRP2 0x0 00:26:30.930 [2024-09-28 01:39:26.681692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.682132] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
00:26:30.930 [2024-09-28 01:39:26.682521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.930 [2024-09-28 01:39:26.682600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.682638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.930 [2024-09-28 01:39:26.682918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.682976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.930 [2024-09-28 01:39:26.682997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.683014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.930 [2024-09-28 01:39:26.683033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.930 [2024-09-28 01:39:26.683317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:30.930 [2024-09-28 01:39:26.683822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.930 [2024-09-28 01:39:26.683941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:30.930 [2024-09-28 01:39:26.684159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.930 [2024-09-28 01:39:26.684297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:30.930 [2024-09-28 01:39:26.684320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:30.930 [2024-09-28 01:39:26.684585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:30.930 [2024-09-28 01:39:26.684654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.930 [2024-09-28 01:39:26.684682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.930 [2024-09-28 01:39:26.684816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.930 [2024-09-28 01:39:26.685001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.930 [2024-09-28 01:39:26.685024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.867 3672.50 IOPS, 14.35 MiB/s 01:39:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:31.867 [2024-09-28 01:39:27.685426] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.867 [2024-09-28 01:39:27.685534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:31.867 [2024-09-28 01:39:27.685557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:31.867 [2024-09-28 01:39:27.685594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:31.867 [2024-09-28 01:39:27.685621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.867 [2024-09-28 01:39:27.685639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.867 [2024-09-28 01:39:27.685654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.867 [2024-09-28 01:39:27.685693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.867 [2024-09-28 01:39:27.685711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.127 [2024-09-28 01:39:27.896259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:32.127 01:39:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 88349 00:26:32.955 2448.33 IOPS, 9.56 MiB/s [2024-09-28 01:39:28.702473] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
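What just completed is the fault-injection step of the test: the target-side TCP listener is dropped while I/O is running (producing the aborted queued I/O and the connect() errno=111 retries above), then restored about a second later so the reconnect finishes before the 5 s controller-loss timeout fires, ending in "Resetting controller successful". The three traced commands (host/timeout.sh@87, @90 and @91), gathered into a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target-side RPC socket, hence no -s flag
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    sleep 1    # outage stays well under --ctrlr-loss-timeout-sec 5
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420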
00:26:39.955 1836.25 IOPS, 7.17 MiB/s 2862.80 IOPS, 11.18 MiB/s 3819.00 IOPS, 14.92 MiB/s 4506.29 IOPS, 17.60 MiB/s 5021.25 IOPS, 19.61 MiB/s 5430.44 IOPS, 21.21 MiB/s 5752.80 IOPS, 22.47 MiB/s 00:26:39.955 Latency(us) 00:26:39.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.955 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:39.955 Verification LBA range: start 0x0 length 0x4000 00:26:39.955 NVMe0n1 : 10.01 5758.47 22.49 0.00 0.00 22193.69 1437.32 3050402.91 00:26:39.955 =================================================================================================================== 00:26:39.956 Total : 5758.47 22.49 0.00 0.00 22193.69 1437.32 3050402.91 00:26:39.956 { 00:26:39.956 "results": [ 00:26:39.956 { 00:26:39.956 "job": "NVMe0n1", 00:26:39.956 "core_mask": "0x4", 00:26:39.956 "workload": "verify", 00:26:39.956 "status": "finished", 00:26:39.956 "verify_range": { 00:26:39.956 "start": 0, 00:26:39.956 "length": 16384 00:26:39.956 }, 00:26:39.956 "queue_depth": 128, 00:26:39.956 "io_size": 4096, 00:26:39.956 "runtime": 10.009261, 00:26:39.956 "iops": 5758.467083633846, 00:26:39.956 "mibps": 22.494012045444713, 00:26:39.956 "io_failed": 0, 00:26:39.956 "io_timeout": 0, 00:26:39.956 "avg_latency_us": 22193.69267926147, 00:26:39.956 "min_latency_us": 1437.3236363636363, 00:26:39.956 "max_latency_us": 3050402.909090909 00:26:39.956 } 00:26:39.956 ], 00:26:39.956 "core_count": 1 00:26:39.956 } 00:26:39.956 01:39:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88455 00:26:39.956 01:39:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:39.956 01:39:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:26:39.956 Running I/O for 10 seconds... 
00:26:40.894 01:39:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:40.894 6428.00 IOPS, 25.11 MiB/s [2024-09-28 01:39:36.769004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.894 [2024-09-28 01:39:36.769422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.769995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.895 [2024-09-28 01:39:36.770561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.896 [2024-09-28 01:39:36.770573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:40.896 [2024-09-28 01:39:36.771010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.771078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.771117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.771134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.771152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.771169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.771186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.771201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.771606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.771627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.771646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.771675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.771691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.771704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.771720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 
[2024-09-28 01:39:36.772139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.772975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.772991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.896 [2024-09-28 01:39:36.773660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.896 [2024-09-28 01:39:36.773674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.773689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.773702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.773717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.773730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.773745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.773759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.773774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.773787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.773818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.773831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.773847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.773860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.773875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:40.897 [2024-09-28 01:39:36.774356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774703] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.774973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.774990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775006] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.897 [2024-09-28 01:39:36.775371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.897 [2024-09-28 01:39:36.775385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.775399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.775411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.775425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.775438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.775452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.775464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.775477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.775489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.776027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.776512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.776968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.777329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.777789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.778247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.778698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.778912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 
01:39:36.779390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.898 [2024-09-28 01:39:36.779728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.898 [2024-09-28 01:39:36.779961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.898 [2024-09-28 01:39:36.779974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.779988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.899 [2024-09-28 01:39:36.780001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.899 [2024-09-28 01:39:36.780028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.899 [2024-09-28 01:39:36.780054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.899 [2024-09-28 01:39:36.780081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.899 [2024-09-28 01:39:36.780108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.899 [2024-09-28 01:39:36.780135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.899 [2024-09-28 01:39:36.780161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:40.899 [2024-09-28 01:39:36.780192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:40.899 [2024-09-28 01:39:36.780204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:40.899 [2024-09-28 01:39:36.780220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56536 len:8 PRP1 0x0 PRP2 0x0 00:26:40.899 [2024-09-28 01:39:36.780234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780486] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002bc80 was disconnected and freed. reset controller. 00:26:40.899 [2024-09-28 01:39:36.780604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.899 [2024-09-28 01:39:36.780626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.899 [2024-09-28 01:39:36.780654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.899 [2024-09-28 01:39:36.780680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.899 [2024-09-28 01:39:36.780704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.899 [2024-09-28 01:39:36.780716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:40.899 [2024-09-28 01:39:36.780933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.899 [2024-09-28 01:39:36.780964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:40.899 [2024-09-28 01:39:36.781080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.899 [2024-09-28 01:39:36.781109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:40.899 [2024-09-28 01:39:36.781128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:40.899 [2024-09-28 01:39:36.781154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:40.899 [2024-09-28 01:39:36.781177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.899 [2024-09-28 01:39:36.781191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.899 [2024-09-28 01:39:36.781204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.899 [2024-09-28 01:39:36.781236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.899 [2024-09-28 01:39:36.781252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.899 01:39:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:42.092 3477.50 IOPS, 13.58 MiB/s [2024-09-28 01:39:37.791174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.092 [2024-09-28 01:39:37.791600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:42.092 [2024-09-28 01:39:37.791635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:42.092 [2024-09-28 01:39:37.791673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:42.092 [2024-09-28 01:39:37.791732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.092 [2024-09-28 01:39:37.791749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.092 [2024-09-28 01:39:37.791764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.092 [2024-09-28 01:39:37.791802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.092 [2024-09-28 01:39:37.791820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.030 2318.33 IOPS, 9.06 MiB/s [2024-09-28 01:39:38.791964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.030 [2024-09-28 01:39:38.792038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:43.030 [2024-09-28 01:39:38.792061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:43.030 [2024-09-28 01:39:38.792091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:43.030 [2024-09-28 01:39:38.792117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.030 [2024-09-28 01:39:38.792131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.030 [2024-09-28 01:39:38.792144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.030 [2024-09-28 01:39:38.792179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.030 [2024-09-28 01:39:38.792196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.965 1738.75 IOPS, 6.79 MiB/s [2024-09-28 01:39:39.793180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.966 [2024-09-28 01:39:39.793257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:43.966 [2024-09-28 01:39:39.793280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:43.966 [2024-09-28 01:39:39.793614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:43.966 [2024-09-28 01:39:39.794036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.966 [2024-09-28 01:39:39.794070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.966 [2024-09-28 01:39:39.794087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.966 [2024-09-28 01:39:39.798071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.966 [2024-09-28 01:39:39.798129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.966 01:39:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:44.224 [2024-09-28 01:39:40.069808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:44.224 01:39:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 88455 00:26:45.049 1391.00 IOPS, 5.43 MiB/s [2024-09-28 01:39:40.833943] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
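The failure/recovery pattern above is the core of the timeout test: the target's TCP listener has been removed, so every reconnect attempt from the host dies with connect() errno 111 until host/timeout.sh re-adds the listener and the pending controller reset finally completes. A minimal sketch of that listener toggle with SPDK's rpc.py (the RPC names, subsystem NQN, address/port and the 3-second window are taken from the log; the standalone shell framing is illustrative, not the test script itself):

    # drop the listener: outstanding I/O is aborted (SQ DELETION) and reconnects start failing with errno 111
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3    # let the host burn through a few reconnect attempts against the closed port
    # restore the listener: the next reconnect succeeds and bdev_nvme logs "Resetting controller successful"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420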
00:26:49.762 2401.67 IOPS, 9.38 MiB/s 3312.29 IOPS, 12.94 MiB/s 3997.00 IOPS, 15.61 MiB/s 4542.44 IOPS, 17.74 MiB/s 4957.80 IOPS, 19.37 MiB/s 00:26:49.762 Latency(us) 00:26:49.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.762 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:49.762 Verification LBA range: start 0x0 length 0x4000 00:26:49.762 NVMe0n1 : 10.01 4963.72 19.39 4022.47 0.00 14217.21 778.24 3035150.89 00:26:49.762 =================================================================================================================== 00:26:49.762 Total : 4963.72 19.39 4022.47 0.00 14217.21 0.00 3035150.89 00:26:49.762 { 00:26:49.762 "results": [ 00:26:49.762 { 00:26:49.762 "job": "NVMe0n1", 00:26:49.762 "core_mask": "0x4", 00:26:49.762 "workload": "verify", 00:26:49.762 "status": "finished", 00:26:49.762 "verify_range": { 00:26:49.762 "start": 0, 00:26:49.762 "length": 16384 00:26:49.762 }, 00:26:49.762 "queue_depth": 128, 00:26:49.762 "io_size": 4096, 00:26:49.762 "runtime": 10.009022, 00:26:49.762 "iops": 4963.721730254964, 00:26:49.762 "mibps": 19.389538008808454, 00:26:49.762 "io_failed": 40261, 00:26:49.762 "io_timeout": 0, 00:26:49.762 "avg_latency_us": 14217.207065889204, 00:26:49.762 "min_latency_us": 778.24, 00:26:49.762 "max_latency_us": 3035150.8945454545 00:26:49.762 } 00:26:49.762 ], 00:26:49.762 "core_count": 1 00:26:49.762 } 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88324 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88324 ']' 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88324 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88324 00:26:50.021 killing process with pid 88324 00:26:50.021 Received shutdown signal, test time was about 10.000000 seconds 00:26:50.021 00:26:50.021 Latency(us) 00:26:50.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.021 =================================================================================================================== 00:26:50.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88324' 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88324 00:26:50.021 01:39:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88324 00:26:50.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
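The human-readable summary row and the JSON block describe the same run: the MiB/s column is iops x io_size / 2^20 and the Fail/s column is io_failed / runtime, so the numbers can be cross-checked directly (a throwaway awk check using the values above, not part of the test):

    awk 'BEGIN { printf "MiB/s = %.2f  Fail/s = %.2f\n", 4963.721730254964 * 4096 / 1048576, 40261 / 10.009022 }'
    # MiB/s = 19.39  Fail/s = 4022.47   -- matches the NVMe0n1 row in the table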
00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88576 00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88576 /var/tmp/bdevperf.sock 00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88576 ']' 00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.958 01:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.958 [2024-09-28 01:39:46.849302] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:26:50.958 [2024-09-28 01:39:46.849785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88576 ] 00:26:51.218 [2024-09-28 01:39:47.021322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.477 [2024-09-28 01:39:47.179184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.477 [2024-09-28 01:39:47.337963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:52.045 01:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:52.045 01:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:52.045 01:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88576 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:26:52.045 01:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88592 00:26:52.045 01:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:26:52.304 01:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:52.564 NVMe0n1 00:26:52.564 01:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88628 00:26:52.564 01:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:52.564 01:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:26:52.564 Running I/O for 10 seconds... 
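This second bdevperf instance is started idle (-z) and driven entirely over its RPC socket: global bdev_nvme options are applied first, the controller is attached with an explicit controller-loss timeout of 5 s and a reconnect delay of 2 s, and only then is the queued random-read job started. A condensed sketch of that RPC sequence (arguments are copied from the log lines above; paths are shortened relative to the SPDK repo, and this is a sketch of the flow rather than the timeout.sh script itself):

    # global NVMe bdev options used by this run (flags as logged: -r -1 -e 9)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    # create bdev NVMe0n1: declare the controller lost after 5 s, retrying the connection every 2 s
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start the workload defined on the bdevperf command line (randread, queue depth 128, 4 KiB I/O, 10 s)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests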
00:26:53.501 01:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:53.765 13716.00 IOPS, 53.58 MiB/s [2024-09-28 01:39:49.604930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.765 [2024-09-28 01:39:49.605688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.605990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:53.766 [2024-09-28 01:39:49.606918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-09-28 01:39:49.606974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.766 [2024-09-28 01:39:49.607014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-09-28 01:39:49.607031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.607448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.607726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.608202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.608592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.608878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.608903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.608923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.608937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.608954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.608981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.608999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 
01:39:49.609012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.767 [2024-09-28 01:39:49.609471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-09-28 01:39:49.609500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.609518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.609546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.609565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.609578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.609594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.609607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.609625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.609637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.609664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.609692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.609708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.609721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.609844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.609866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.609885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.609898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:53.768 [2024-09-28 01:39:49.610743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-09-28 01:39:49.610985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-09-28 01:39:49.610999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-09-28 01:39:49.611969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.769 [2024-09-28 01:39:49.611985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.611997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114480 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.612422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.612434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.613051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.613505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.613953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.614397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.614787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.615338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.615829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.616284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.616744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:53.770 [2024-09-28 01:39:49.617180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.617591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.618032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.618511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.618794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.618831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.618846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.618866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.618879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.618896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-09-28 01:39:49.618908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-09-28 01:39:49.618937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:53.770 [2024-09-28 01:39:49.618983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:53.770 [2024-09-28 01:39:49.619004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:53.771 [2024-09-28 01:39:49.619017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126400 len:8 PRP1 0x0 PRP2 0x0 00:26:53.771 [2024-09-28 01:39:49.619032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.771 [2024-09-28 01:39:49.619274] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b280 was disconnected and freed. reset controller. 
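The wall of paired NOTICE lines above is one READ command print plus one ABORTED - SQ DELETION completion for every request still queued on I/O qpair 1 (cid 0 through 126) at the moment the qpair was torn down; the final bdev_nvme message then hands the controller to the reset path. A quick way to tally those aborts from a saved copy of this console output (the log file name here is illustrative, not produced by the run):

  # -o is needed because this capture wraps many log records onto each physical line,
  # so a plain 'grep -c' would count wrapped lines rather than abort completions
  grep -o 'ABORTED - SQ DELETION (00/08) qid:1' nvmf-tcp-uring-vg-autotest.log | wc -l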
00:26:53.771 [2024-09-28 01:39:49.619410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.771 [2024-09-28 01:39:49.619436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.771 [2024-09-28 01:39:49.619452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.771 [2024-09-28 01:39:49.619466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.771 [2024-09-28 01:39:49.619479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.771 [2024-09-28 01:39:49.619506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.771 [2024-09-28 01:39:49.619521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.771 [2024-09-28 01:39:49.619534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.771 [2024-09-28 01:39:49.619546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:53.771 [2024-09-28 01:39:49.619822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.771 [2024-09-28 01:39:49.619859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:53.771 [2024-09-28 01:39:49.620002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-09-28 01:39:49.620037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:53.771 [2024-09-28 01:39:49.620053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:53.771 [2024-09-28 01:39:49.620082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:53.771 [2024-09-28 01:39:49.620105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.771 [2024-09-28 01:39:49.620124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.771 [2024-09-28 01:39:49.620138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.771 [2024-09-28 01:39:49.620172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
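The reconnect attempt just above fails inside uring_sock_create with connect() errno = 111: nothing is accepting connections at 10.0.0.3 port 4420 while the timeout test is in progress, so the controller re-init fails and another reset is scheduled. The numeric errno can be decoded on any Linux host:

  # errno 111 is ECONNREFUSED ("Connection refused")
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'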
00:26:53.771 [2024-09-28 01:39:49.620188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.771 01:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 88628 00:26:55.903 7875.00 IOPS, 30.76 MiB/s 5250.00 IOPS, 20.51 MiB/s [2024-09-28 01:39:51.620358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.903 [2024-09-28 01:39:51.620742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:55.903 [2024-09-28 01:39:51.621200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:55.903 [2024-09-28 01:39:51.621659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:55.903 [2024-09-28 01:39:51.622144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.903 [2024-09-28 01:39:51.622559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.903 [2024-09-28 01:39:51.622990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.903 [2024-09-28 01:39:51.623255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.903 [2024-09-28 01:39:51.623530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.773 3937.50 IOPS, 15.38 MiB/s 3150.00 IOPS, 12.30 MiB/s [2024-09-28 01:39:53.624153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.773 [2024-09-28 01:39:53.624564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:57.773 [2024-09-28 01:39:53.624598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:57.773 [2024-09-28 01:39:53.624639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:57.773 [2024-09-28 01:39:53.624682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:57.773 [2024-09-28 01:39:53.624699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:57.773 [2024-09-28 01:39:53.624713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:57.773 [2024-09-28 01:39:53.624755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.773 [2024-09-28 01:39:53.624788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.904 2625.00 IOPS, 10.25 MiB/s 2250.00 IOPS, 8.79 MiB/s [2024-09-28 01:39:55.624913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
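Each retry above lands roughly two seconds after the previous one while bdevperf keeps reporting falling IOPS. The probe trace dumped a little further below records every such retry as a "reconnect delay bdev controller NVMe0" line, and the harness then counts those lines with grep -c and compares the count against 2, as the (( 3 <= 2 )) evaluation below shows. The count can be reproduced by hand against the same trace file (before timeout.sh removes it at the end of the run):

  # three delayed reconnects were traced in this run, matching the 3 in (( 3 <= 2 ))
  grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt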
00:26:59.904 [2024-09-28 01:39:55.624965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.904 [2024-09-28 01:39:55.625004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.904 [2024-09-28 01:39:55.625018] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:59.904 [2024-09-28 01:39:55.625060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.838 1968.75 IOPS, 7.69 MiB/s 00:27:00.838 Latency(us) 00:27:00.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.838 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:00.838 NVMe0n1 : 8.18 1924.73 7.52 15.64 0.00 66010.00 8817.57 7046430.72 00:27:00.838 =================================================================================================================== 00:27:00.838 Total : 1924.73 7.52 15.64 0.00 66010.00 8817.57 7046430.72 00:27:00.838 { 00:27:00.838 "results": [ 00:27:00.838 { 00:27:00.838 "job": "NVMe0n1", 00:27:00.838 "core_mask": "0x4", 00:27:00.838 "workload": "randread", 00:27:00.838 "status": "finished", 00:27:00.838 "queue_depth": 128, 00:27:00.838 "io_size": 4096, 00:27:00.838 "runtime": 8.182971, 00:27:00.838 "iops": 1924.7288057112753, 00:27:00.839 "mibps": 7.518471897309669, 00:27:00.839 "io_failed": 128, 00:27:00.839 "io_timeout": 0, 00:27:00.839 "avg_latency_us": 66009.99512922396, 00:27:00.839 "min_latency_us": 8817.57090909091, 00:27:00.839 "max_latency_us": 7046430.72 00:27:00.839 } 00:27:00.839 ], 00:27:00.839 "core_count": 1 00:27:00.839 } 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:00.839 Attaching 5 probes... 
00:27:00.839 1364.183916: reset bdev controller NVMe0 00:27:00.839 1364.299645: reconnect bdev controller NVMe0 00:27:00.839 3364.614342: reconnect delay bdev controller NVMe0 00:27:00.839 3364.649792: reconnect bdev controller NVMe0 00:27:00.839 5368.412559: reconnect delay bdev controller NVMe0 00:27:00.839 5368.446852: reconnect bdev controller NVMe0 00:27:00.839 7369.238180: reconnect delay bdev controller NVMe0 00:27:00.839 7369.290459: reconnect bdev controller NVMe0 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 88592 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88576 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88576 ']' 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88576 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88576 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:00.839 killing process with pid 88576 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88576' 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88576 00:27:00.839 Received shutdown signal, test time was about 8.251878 seconds 00:27:00.839 00:27:00.839 Latency(us) 00:27:00.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.839 =================================================================================================================== 00:27:00.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.839 01:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88576 00:27:01.776 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.034 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:02.034 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:02.034 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:02.034 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:27:02.034 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:02.034 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:27:02.034 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:02.034 01:39:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:02.034 rmmod 
nvme_tcp 00:27:02.293 rmmod nvme_fabrics 00:27:02.293 rmmod nvme_keyring 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 88124 ']' 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 88124 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88124 ']' 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88124 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88124 00:27:02.293 killing process with pid 88124 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88124' 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88124 00:27:02.293 01:39:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88124 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:03.229 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:03.487 01:39:59 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:27:03.487 00:27:03.487 real 0m50.847s 00:27:03.487 user 2m27.032s 00:27:03.487 sys 0m5.652s 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.487 01:39:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.487 ************************************ 00:27:03.487 END TEST nvmf_timeout 00:27:03.487 ************************************ 00:27:03.746 01:39:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:27:03.746 01:39:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:03.746 ************************************ 00:27:03.746 END TEST nvmf_host 00:27:03.746 ************************************ 00:27:03.746 00:27:03.746 real 6m25.219s 00:27:03.746 user 17m44.691s 00:27:03.746 sys 1m17.384s 00:27:03.746 01:39:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.746 01:39:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.746 01:39:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:03.746 01:39:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:27:03.746 ************************************ 00:27:03.746 END TEST nvmf_tcp 00:27:03.746 ************************************ 00:27:03.746 00:27:03.746 real 17m8.100s 00:27:03.746 user 44m29.160s 00:27:03.746 sys 4m3.518s 00:27:03.746 01:39:59 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.746 01:39:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.746 01:39:59 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:27:03.746 01:39:59 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:03.746 01:39:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:03.746 01:39:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.746 01:39:59 -- common/autotest_common.sh@10 -- # set +x 00:27:03.746 ************************************ 00:27:03.746 START TEST nvmf_dif 00:27:03.746 ************************************ 00:27:03.746 01:39:59 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:03.746 * Looking for test storage... 
00:27:03.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:03.746 01:39:59 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:03.746 01:39:59 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:27:03.746 01:39:59 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:04.006 01:39:59 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:04.006 01:39:59 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.006 01:39:59 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:04.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.006 --rc genhtml_branch_coverage=1 00:27:04.006 --rc genhtml_function_coverage=1 00:27:04.006 --rc genhtml_legend=1 00:27:04.006 --rc geninfo_all_blocks=1 00:27:04.006 --rc geninfo_unexecuted_blocks=1 00:27:04.006 00:27:04.006 ' 00:27:04.006 01:39:59 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:04.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.006 --rc genhtml_branch_coverage=1 00:27:04.006 --rc genhtml_function_coverage=1 00:27:04.006 --rc genhtml_legend=1 00:27:04.006 --rc geninfo_all_blocks=1 00:27:04.006 --rc geninfo_unexecuted_blocks=1 00:27:04.006 00:27:04.006 ' 00:27:04.006 01:39:59 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:27:04.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.006 --rc genhtml_branch_coverage=1 00:27:04.006 --rc genhtml_function_coverage=1 00:27:04.006 --rc genhtml_legend=1 00:27:04.006 --rc geninfo_all_blocks=1 00:27:04.006 --rc geninfo_unexecuted_blocks=1 00:27:04.006 00:27:04.006 ' 00:27:04.006 01:39:59 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:04.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.006 --rc genhtml_branch_coverage=1 00:27:04.006 --rc genhtml_function_coverage=1 00:27:04.006 --rc genhtml_legend=1 00:27:04.006 --rc geninfo_all_blocks=1 00:27:04.006 --rc geninfo_unexecuted_blocks=1 00:27:04.006 00:27:04.006 ' 00:27:04.006 01:39:59 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.006 01:39:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.006 01:39:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.006 01:39:59 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.006 01:39:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.006 01:39:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:04.006 01:39:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:04.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.006 01:39:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:04.006 01:39:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:04.006 01:39:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:04.006 01:39:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:04.006 01:39:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.006 01:39:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:04.006 01:39:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:27:04.006 01:39:59 
nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:27:04.006 01:39:59 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:04.007 Cannot find device "nvmf_init_br" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:04.007 Cannot find device "nvmf_init_br2" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:04.007 Cannot find device "nvmf_tgt_br" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:04.007 Cannot find device "nvmf_tgt_br2" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:04.007 Cannot find device "nvmf_init_br" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:04.007 Cannot find device "nvmf_init_br2" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:04.007 Cannot find device "nvmf_tgt_br" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:04.007 Cannot find device "nvmf_tgt_br2" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:04.007 Cannot find device "nvmf_br" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:04.007 Cannot find device "nvmf_init_if" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:04.007 Cannot find device "nvmf_init_if2" 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:04.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:04.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:04.007 01:39:59 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:04.265 01:39:59 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:04.265 01:39:59 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:04.265 01:39:59 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:04.265 01:39:59 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:04.265 01:40:00 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:04.266 01:40:00 nvmf_dif -- 
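The nvmf_veth_init trace above first tears down leftovers from a previous run (the "Cannot find device" and "Cannot open network namespace" messages are the expected result of deleting interfaces and namespaces that do not exist yet) and then builds the test topology: target-side veth ends inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), initiator-side ends in the root namespace (10.0.0.1, 10.0.0.2), and the peer ends joined by the nvmf_br bridge. A condensed sketch of the same ip(8) calls, showing only the first initiator/target pair:

ip netns add nvmf_tgt_ns_spdk
# each veth pair has an *_if end that carries an address and an *_br end for the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# one bridge in the root namespace ties the two sides together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br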
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:04.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:04.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:27:04.266 00:27:04.266 --- 10.0.0.3 ping statistics --- 00:27:04.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.266 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:04.266 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:04.266 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:27:04.266 00:27:04.266 --- 10.0.0.4 ping statistics --- 00:27:04.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.266 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:04.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:27:04.266 00:27:04.266 --- 10.0.0.1 ping statistics --- 00:27:04.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.266 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:04.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:04.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:27:04.266 00:27:04.266 --- 10.0.0.2 ping statistics --- 00:27:04.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.266 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:27:04.266 01:40:00 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:04.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:04.833 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:04.833 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:04.833 01:40:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:04.833 01:40:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:04.833 01:40:00 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.833 01:40:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=89135 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 89135 00:27:04.833 01:40:00 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:04.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.833 01:40:00 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 89135 ']' 00:27:04.833 01:40:00 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.833 01:40:00 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:04.833 01:40:00 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.833 01:40:00 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.833 01:40:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:04.833 [2024-09-28 01:40:00.718659] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:27:04.833 [2024-09-28 01:40:00.719689] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.091 [2024-09-28 01:40:00.901926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.350 [2024-09-28 01:40:01.132329] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:05.350 [2024-09-28 01:40:01.132711] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.350 [2024-09-28 01:40:01.132758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.350 [2024-09-28 01:40:01.132784] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.350 [2024-09-28 01:40:01.132802] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.350 [2024-09-28 01:40:01.132858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.609 [2024-09-28 01:40:01.309938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:27:05.868 01:40:01 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:05.868 01:40:01 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.868 01:40:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:05.868 01:40:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:05.868 [2024-09-28 01:40:01.754686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.868 01:40:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.868 01:40:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:05.868 ************************************ 00:27:05.868 START TEST fio_dif_1_default 00:27:05.868 ************************************ 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:05.868 bdev_null0 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:05.868 
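By this point nvmfappstart has launched build/bin/nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... -i 0 -e 0xFFFF) and waitforlisten has confirmed the RPC socket. rpc_cmd in the trace is the harness front end for SPDK's scripts/rpc.py, so the target-side state that fio_dif_1_default builds can be reproduced by hand with the calls sketched below; the paths and addresses are the ones used by this run, and rpc.py talks to the default /var/tmp/spdk.sock:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport with DIF insert/strip enabled, matching "create_transport" above
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
# 64 MiB null bdev, 512-byte blocks with 16 bytes of metadata, protection information type 1
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# subsystem that exposes the bdev on the target side of the veth topology
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420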
01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.868 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:06.127 [2024-09-28 01:40:01.803742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:06.127 { 00:27:06.127 "params": { 00:27:06.127 "name": "Nvme$subsystem", 00:27:06.127 "trtype": "$TEST_TRANSPORT", 00:27:06.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.127 "adrfam": "ipv4", 00:27:06.127 "trsvcid": "$NVMF_PORT", 00:27:06.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.127 "hdgst": ${hdgst:-false}, 00:27:06.127 "ddgst": ${ddgst:-false} 00:27:06.127 }, 00:27:06.127 "method": "bdev_nvme_attach_controller" 00:27:06.127 } 00:27:06.127 EOF 00:27:06.127 )") 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:27:06.127 01:40:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:06.127 "params": { 00:27:06.127 "name": "Nvme0", 00:27:06.128 "trtype": "tcp", 00:27:06.128 "traddr": "10.0.0.3", 00:27:06.128 "adrfam": "ipv4", 00:27:06.128 "trsvcid": "4420", 00:27:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:06.128 "hdgst": false, 00:27:06.128 "ddgst": false 00:27:06.128 }, 00:27:06.128 "method": "bdev_nvme_attach_controller" 00:27:06.128 }' 00:27:06.128 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:06.128 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:06.128 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:27:06.128 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:06.128 01:40:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.128 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:06.128 fio-3.35 00:27:06.128 Starting 1 thread 00:27:18.358 00:27:18.358 filename0: (groupid=0, jobs=1): err= 0: pid=89195: Sat Sep 28 01:40:12 2024 00:27:18.358 read: IOPS=7699, BW=30.1MiB/s (31.5MB/s)(301MiB/10001msec) 00:27:18.358 slat (nsec): min=7118, max=92414, avg=10235.80, stdev=4937.45 00:27:18.358 clat (usec): min=399, max=2162, avg=488.08, stdev=54.81 00:27:18.358 lat (usec): min=406, max=2176, avg=498.32, stdev=56.16 00:27:18.358 clat percentiles (usec): 00:27:18.358 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 445], 00:27:18.358 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 490], 00:27:18.358 | 70.00th=[ 506], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 586], 00:27:18.358 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[ 807], 99.95th=[ 865], 00:27:18.358 | 99.99th=[ 1074] 00:27:18.358 bw ( KiB/s): min=28864, max=31840, per=99.94%, avg=30780.63, stdev=697.50, samples=19 00:27:18.358 iops : min= 7216, max= 7960, avg=7695.16, stdev=174.37, samples=19 00:27:18.358 lat (usec) : 
500=66.22%, 750=33.52%, 1000=0.25% 00:27:18.358 lat (msec) : 2=0.01%, 4=0.01% 00:27:18.358 cpu : usr=84.88%, sys=13.18%, ctx=90, majf=0, minf=1061 00:27:18.358 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:18.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.358 issued rwts: total=77004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.358 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:18.358 00:27:18.358 Run status group 0 (all jobs): 00:27:18.358 READ: bw=30.1MiB/s (31.5MB/s), 30.1MiB/s-30.1MiB/s (31.5MB/s-31.5MB/s), io=301MiB (315MB), run=10001-10001msec 00:27:18.358 ----------------------------------------------------- 00:27:18.358 Suppressions used: 00:27:18.358 count bytes template 00:27:18.358 1 8 /usr/src/fio/parse.c 00:27:18.358 1 8 libtcmalloc_minimal.so 00:27:18.358 1 904 libcrypto.so 00:27:18.358 ----------------------------------------------------- 00:27:18.358 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 ************************************ 00:27:18.359 END TEST fio_dif_1_default 00:27:18.359 ************************************ 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 00:27:18.359 real 0m12.160s 00:27:18.359 user 0m10.239s 00:27:18.359 sys 0m1.650s 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 01:40:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:18.359 01:40:13 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:18.359 01:40:13 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:18.359 01:40:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 ************************************ 00:27:18.359 START TEST fio_dif_1_multi_subsystems 00:27:18.359 ************************************ 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:18.359 01:40:13 
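Before the two-subsystem variant below, the single-target numbers above pass a quick consistency check: 7699 IOPS x 4096 B is about 31.5 MB/s, i.e. the reported 30.1 MiB/s, and 30.1 MiB/s over the 10.001 s run is the 301 MiB (77004 reads) shown on the READ status line. At iodepth 4, Little's law gives 4 / 7699 IOPS, roughly 520 us per I/O, in line with the ~10 us submission plus ~488 us average completion latency reported for the null-bdev path.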
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 bdev_null0 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 [2024-09-28 01:40:14.014061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 bdev_null1 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 
--allow-any-host 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:18.359 { 00:27:18.359 "params": { 00:27:18.359 "name": "Nvme$subsystem", 00:27:18.359 "trtype": "$TEST_TRANSPORT", 00:27:18.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.359 "adrfam": "ipv4", 00:27:18.359 "trsvcid": "$NVMF_PORT", 00:27:18.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.359 "hdgst": ${hdgst:-false}, 00:27:18.359 "ddgst": ${ddgst:-false} 00:27:18.359 }, 00:27:18.359 "method": "bdev_nvme_attach_controller" 00:27:18.359 } 00:27:18.359 EOF 00:27:18.359 )") 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- 
# sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:18.359 { 00:27:18.359 "params": { 00:27:18.359 "name": "Nvme$subsystem", 00:27:18.359 "trtype": "$TEST_TRANSPORT", 00:27:18.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.359 "adrfam": "ipv4", 00:27:18.359 "trsvcid": "$NVMF_PORT", 00:27:18.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.359 "hdgst": ${hdgst:-false}, 00:27:18.359 "ddgst": ${ddgst:-false} 00:27:18.359 }, 00:27:18.359 "method": "bdev_nvme_attach_controller" 00:27:18.359 } 00:27:18.359 EOF 00:27:18.359 )") 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
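The configuration assembled here (printed just below in the trace) is the bdev config handed to fio's spdk_bdev external engine: one bdev_nvme_attach_controller entry per subsystem, so the initiator side of this test runs entirely in user space over the veth TCP path, and libasan is preloaded only because this build is ASAN-instrumented. Run outside the harness, the invocation looks roughly like the sketch below; the subsystems/bdev wrapper around the printed entries, the Nvme0n1 bdev name, and the jobfile contents are illustrative assumptions, while the addresses, NQNs, plugin path, and fio options come from this trace (the harness passes both files via /dev/fd/62 and /dev/fd/61 instead of temp files):

PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # fio external ioengine built with SPDK
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
cat > /tmp/dif.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
[filename0]
filename=Nvme0n1
rw=randread
bs=4k
iodepth=4
FIO
LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio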
00:27:18.359 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:27:18.360 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:18.360 "params": { 00:27:18.360 "name": "Nvme0", 00:27:18.360 "trtype": "tcp", 00:27:18.360 "traddr": "10.0.0.3", 00:27:18.360 "adrfam": "ipv4", 00:27:18.360 "trsvcid": "4420", 00:27:18.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:18.360 "hdgst": false, 00:27:18.360 "ddgst": false 00:27:18.360 }, 00:27:18.360 "method": "bdev_nvme_attach_controller" 00:27:18.360 },{ 00:27:18.360 "params": { 00:27:18.360 "name": "Nvme1", 00:27:18.360 "trtype": "tcp", 00:27:18.360 "traddr": "10.0.0.3", 00:27:18.360 "adrfam": "ipv4", 00:27:18.360 "trsvcid": "4420", 00:27:18.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:18.360 "hdgst": false, 00:27:18.360 "ddgst": false 00:27:18.360 }, 00:27:18.360 "method": "bdev_nvme_attach_controller" 00:27:18.360 }' 00:27:18.360 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:18.360 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:18.360 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:27:18.360 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:18.360 01:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.620 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:18.620 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:18.620 fio-3.35 00:27:18.620 Starting 2 threads 00:27:30.826 00:27:30.826 filename0: (groupid=0, jobs=1): err= 0: pid=89359: Sat Sep 28 01:40:25 2024 00:27:30.826 read: IOPS=4274, BW=16.7MiB/s (17.5MB/s)(167MiB/10001msec) 00:27:30.826 slat (nsec): min=7545, max=60033, avg=14420.87, stdev=4992.76 00:27:30.826 clat (usec): min=502, max=4826, avg=895.59, stdev=73.51 00:27:30.826 lat (usec): min=510, max=4863, avg=910.01, stdev=74.14 00:27:30.826 clat percentiles (usec): 00:27:30.826 | 1.00th=[ 791], 5.00th=[ 816], 10.00th=[ 824], 20.00th=[ 840], 00:27:30.826 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 881], 60.00th=[ 898], 00:27:30.826 | 70.00th=[ 922], 80.00th=[ 947], 90.00th=[ 979], 95.00th=[ 1012], 00:27:30.826 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1188], 99.95th=[ 1205], 00:27:30.826 | 99.99th=[ 1516] 00:27:30.826 bw ( KiB/s): min=16736, max=17504, per=50.04%, avg=17113.32, stdev=240.03, samples=19 00:27:30.826 iops : min= 4184, max= 4376, avg=4278.32, stdev=60.02, samples=19 00:27:30.826 lat (usec) : 750=0.02%, 1000=93.83% 00:27:30.826 lat (msec) : 2=6.14%, 10=0.01% 00:27:30.826 cpu : usr=90.99%, sys=7.66%, ctx=17, majf=0, minf=1075 00:27:30.826 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:30.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.826 issued rwts: total=42748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.826 latency 
: target=0, window=0, percentile=100.00%, depth=4 00:27:30.826 filename1: (groupid=0, jobs=1): err= 0: pid=89360: Sat Sep 28 01:40:25 2024 00:27:30.826 read: IOPS=4275, BW=16.7MiB/s (17.5MB/s)(167MiB/10001msec) 00:27:30.826 slat (nsec): min=7618, max=75852, avg=14528.32, stdev=5037.94 00:27:30.826 clat (usec): min=594, max=1729, avg=894.59, stdev=74.80 00:27:30.826 lat (usec): min=606, max=1764, avg=909.11, stdev=76.42 00:27:30.826 clat percentiles (usec): 00:27:30.826 | 1.00th=[ 750], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 832], 00:27:30.826 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 889], 60.00th=[ 906], 00:27:30.826 | 70.00th=[ 930], 80.00th=[ 955], 90.00th=[ 988], 95.00th=[ 1029], 00:27:30.826 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1205], 00:27:30.826 | 99.99th=[ 1254] 00:27:30.826 bw ( KiB/s): min=16736, max=17504, per=50.06%, avg=17120.00, stdev=235.39, samples=19 00:27:30.826 iops : min= 4184, max= 4376, avg=4280.00, stdev=58.85, samples=19 00:27:30.826 lat (usec) : 750=0.99%, 1000=90.61% 00:27:30.826 lat (msec) : 2=8.40% 00:27:30.826 cpu : usr=90.66%, sys=7.92%, ctx=12, majf=0, minf=1062 00:27:30.826 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:30.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.826 issued rwts: total=42764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.826 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:30.826 00:27:30.826 Run status group 0 (all jobs): 00:27:30.826 READ: bw=33.4MiB/s (35.0MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=334MiB (350MB), run=10001-10001msec 00:27:30.826 ----------------------------------------------------- 00:27:30.826 Suppressions used: 00:27:30.826 count bytes template 00:27:30.826 2 16 /usr/src/fio/parse.c 00:27:30.826 1 8 libtcmalloc_minimal.so 00:27:30.826 1 904 libcrypto.so 00:27:30.826 ----------------------------------------------------- 00:27:30.826 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 
00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.826 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:30.826 ************************************ 00:27:30.826 END TEST fio_dif_1_multi_subsystems 00:27:30.827 ************************************ 00:27:30.827 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.827 00:27:30.827 real 0m12.367s 00:27:30.827 user 0m20.156s 00:27:30.827 sys 0m1.922s 00:27:30.827 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:30.827 01:40:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:30.827 01:40:26 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:30.827 01:40:26 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:30.827 01:40:26 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:30.827 01:40:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:30.827 ************************************ 00:27:30.827 START TEST fio_dif_rand_params 00:27:30.827 ************************************ 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.827 01:40:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.827 bdev_null0 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.827 [2024-09-28 01:40:26.429330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:30.827 { 00:27:30.827 "params": { 00:27:30.827 "name": "Nvme$subsystem", 00:27:30.827 "trtype": "$TEST_TRANSPORT", 00:27:30.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.827 "adrfam": "ipv4", 00:27:30.827 "trsvcid": "$NVMF_PORT", 00:27:30.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.827 "hdgst": ${hdgst:-false}, 00:27:30.827 "ddgst": ${ddgst:-false} 00:27:30.827 }, 00:27:30.827 "method": "bdev_nvme_attach_controller" 00:27:30.827 } 00:27:30.827 EOF 00:27:30.827 )") 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:30.827 "params": { 00:27:30.827 "name": "Nvme0", 00:27:30.827 "trtype": "tcp", 00:27:30.827 "traddr": "10.0.0.3", 00:27:30.827 "adrfam": "ipv4", 00:27:30.827 "trsvcid": "4420", 00:27:30.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:30.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:30.827 "hdgst": false, 00:27:30.827 "ddgst": false 00:27:30.827 }, 00:27:30.827 "method": "bdev_nvme_attach_controller" 00:27:30.827 }' 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:30.827 01:40:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.827 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:30.827 ... 
00:27:30.827 fio-3.35 00:27:30.827 Starting 3 threads 00:27:37.396 00:27:37.396 filename0: (groupid=0, jobs=1): err= 0: pid=89517: Sat Sep 28 01:40:32 2024 00:27:37.396 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5006msec) 00:27:37.396 slat (nsec): min=5471, max=48049, avg=13164.24, stdev=6657.35 00:27:37.396 clat (usec): min=9649, max=14563, avg=12606.26, stdev=469.95 00:27:37.396 lat (usec): min=9658, max=14581, avg=12619.43, stdev=471.16 00:27:37.396 clat percentiles (usec): 00:27:37.396 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12256], 20.00th=[12256], 00:27:37.396 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:27:37.397 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13042], 95.00th=[13698], 00:27:37.397 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14615], 99.95th=[14615], 00:27:37.397 | 99.99th=[14615] 00:27:37.397 bw ( KiB/s): min=29184, max=31488, per=33.32%, avg=30336.00, stdev=652.67, samples=10 00:27:37.397 iops : min= 228, max= 246, avg=237.00, stdev= 5.10, samples=10 00:27:37.397 lat (msec) : 10=0.25%, 20=99.75% 00:27:37.397 cpu : usr=91.53%, sys=7.89%, ctx=19, majf=0, minf=1075 00:27:37.397 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.397 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:37.397 filename0: (groupid=0, jobs=1): err= 0: pid=89518: Sat Sep 28 01:40:32 2024 00:27:37.397 read: IOPS=236, BW=29.6MiB/s (31.1MB/s)(148MiB/5002msec) 00:27:37.397 slat (nsec): min=5470, max=59697, avg=12801.21, stdev=6070.09 00:27:37.397 clat (usec): min=12065, max=19215, avg=12629.78, stdev=566.67 00:27:37.397 lat (usec): min=12073, max=19237, avg=12642.58, stdev=567.02 00:27:37.397 clat percentiles (usec): 00:27:37.397 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12256], 20.00th=[12387], 00:27:37.397 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:27:37.397 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13173], 95.00th=[13698], 00:27:37.397 | 99.00th=[14484], 99.50th=[14615], 99.90th=[19268], 99.95th=[19268], 00:27:37.397 | 99.99th=[19268] 00:27:37.397 bw ( KiB/s): min=29184, max=31488, per=33.28%, avg=30299.78, stdev=768.44, samples=9 00:27:37.397 iops : min= 228, max= 246, avg=236.67, stdev= 6.08, samples=9 00:27:37.397 lat (msec) : 20=100.00% 00:27:37.397 cpu : usr=91.74%, sys=7.50%, ctx=730, majf=0, minf=1073 00:27:37.397 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.397 issued rwts: total=1185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:37.397 filename0: (groupid=0, jobs=1): err= 0: pid=89519: Sat Sep 28 01:40:32 2024 00:27:37.397 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5006msec) 00:27:37.397 slat (nsec): min=5309, max=90337, avg=13985.38, stdev=7845.79 00:27:37.397 clat (usec): min=11489, max=14455, avg=12604.38, stdev=446.15 00:27:37.397 lat (usec): min=11534, max=14502, avg=12618.37, stdev=447.23 00:27:37.397 clat percentiles (usec): 00:27:37.397 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12256], 20.00th=[12256], 00:27:37.397 | 30.00th=[12387], 40.00th=[12387], 
50.00th=[12387], 60.00th=[12518], 00:27:37.397 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13566], 00:27:37.397 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:27:37.397 | 99.99th=[14484] 00:27:37.397 bw ( KiB/s): min=29184, max=31488, per=33.32%, avg=30336.00, stdev=652.67, samples=10 00:27:37.397 iops : min= 228, max= 246, avg=237.00, stdev= 5.10, samples=10 00:27:37.397 lat (msec) : 20=100.00% 00:27:37.397 cpu : usr=92.11%, sys=7.25%, ctx=6, majf=0, minf=1075 00:27:37.397 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.397 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:37.397 00:27:37.397 Run status group 0 (all jobs): 00:27:37.397 READ: bw=88.9MiB/s (93.2MB/s), 29.6MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=445MiB (467MB), run=5002-5006msec 00:27:37.657 ----------------------------------------------------- 00:27:37.657 Suppressions used: 00:27:37.657 count bytes template 00:27:37.657 5 44 /usr/src/fio/parse.c 00:27:37.657 1 8 libtcmalloc_minimal.so 00:27:37.657 1 904 libcrypto.so 00:27:37.657 ----------------------------------------------------- 00:27:37.657 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.657 01:40:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 bdev_null0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 [2024-09-28 01:40:33.520066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 bdev_null1 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:37.657 01:40:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 bdev_null2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.657 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # 
config+=("$(cat <<-EOF 00:27:37.917 { 00:27:37.917 "params": { 00:27:37.917 "name": "Nvme$subsystem", 00:27:37.917 "trtype": "$TEST_TRANSPORT", 00:27:37.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.917 "adrfam": "ipv4", 00:27:37.917 "trsvcid": "$NVMF_PORT", 00:27:37.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.917 "hdgst": ${hdgst:-false}, 00:27:37.917 "ddgst": ${ddgst:-false} 00:27:37.917 }, 00:27:37.917 "method": "bdev_nvme_attach_controller" 00:27:37.917 } 00:27:37.917 EOF 00:27:37.917 )") 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:37.917 { 00:27:37.917 "params": { 00:27:37.917 "name": "Nvme$subsystem", 00:27:37.917 "trtype": "$TEST_TRANSPORT", 00:27:37.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.917 "adrfam": "ipv4", 00:27:37.917 "trsvcid": "$NVMF_PORT", 00:27:37.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.917 "hdgst": ${hdgst:-false}, 00:27:37.917 "ddgst": ${ddgst:-false} 00:27:37.917 }, 00:27:37.917 "method": "bdev_nvme_attach_controller" 00:27:37.917 } 00:27:37.917 EOF 00:27:37.917 )") 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:37.917 { 00:27:37.917 "params": { 00:27:37.917 "name": "Nvme$subsystem", 00:27:37.917 "trtype": "$TEST_TRANSPORT", 00:27:37.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.917 "adrfam": "ipv4", 00:27:37.917 "trsvcid": "$NVMF_PORT", 00:27:37.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.917 "hdgst": ${hdgst:-false}, 00:27:37.917 "ddgst": ${ddgst:-false} 00:27:37.917 }, 00:27:37.917 "method": "bdev_nvme_attach_controller" 00:27:37.917 } 00:27:37.917 EOF 00:27:37.917 )") 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:27:37.917 01:40:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:37.917 "params": { 00:27:37.917 "name": "Nvme0", 00:27:37.917 "trtype": "tcp", 00:27:37.917 "traddr": "10.0.0.3", 00:27:37.917 "adrfam": "ipv4", 00:27:37.917 "trsvcid": "4420", 00:27:37.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:37.918 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:37.918 "hdgst": false, 00:27:37.918 "ddgst": false 00:27:37.918 }, 00:27:37.918 "method": "bdev_nvme_attach_controller" 00:27:37.918 },{ 00:27:37.918 "params": { 00:27:37.918 "name": "Nvme1", 00:27:37.918 "trtype": "tcp", 00:27:37.918 "traddr": "10.0.0.3", 00:27:37.918 "adrfam": "ipv4", 00:27:37.918 "trsvcid": "4420", 00:27:37.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.918 "hdgst": false, 00:27:37.918 "ddgst": false 00:27:37.918 }, 00:27:37.918 "method": "bdev_nvme_attach_controller" 00:27:37.918 },{ 00:27:37.918 "params": { 00:27:37.918 "name": "Nvme2", 00:27:37.918 "trtype": "tcp", 00:27:37.918 "traddr": "10.0.0.3", 00:27:37.918 "adrfam": "ipv4", 00:27:37.918 "trsvcid": "4420", 00:27:37.918 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:37.918 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:37.918 "hdgst": false, 00:27:37.918 "ddgst": false 00:27:37.918 }, 00:27:37.918 "method": "bdev_nvme_attach_controller" 00:27:37.918 }' 00:27:37.918 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:37.918 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:37.918 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:37.918 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:37.918 01:40:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:27:38.177 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.177 ... 00:27:38.177 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.177 ... 00:27:38.177 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.177 ... 00:27:38.177 fio-3.35 00:27:38.177 Starting 24 threads 00:27:50.393 00:27:50.393 filename0: (groupid=0, jobs=1): err= 0: pid=89623: Sat Sep 28 01:40:45 2024 00:27:50.393 read: IOPS=173, BW=696KiB/s (712kB/s)(7016KiB/10087msec) 00:27:50.393 slat (usec): min=5, max=8036, avg=24.90, stdev=229.52 00:27:50.393 clat (msec): min=36, max=155, avg=91.62, stdev=20.29 00:27:50.393 lat (msec): min=36, max=155, avg=91.64, stdev=20.30 00:27:50.393 clat percentiles (msec): 00:27:50.393 | 1.00th=[ 51], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 78], 00:27:50.393 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 96], 00:27:50.393 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 118], 95.00th=[ 129], 00:27:50.393 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:27:50.393 | 99.99th=[ 157] 00:27:50.393 bw ( KiB/s): min= 512, max= 792, per=3.86%, avg=695.20, stdev=75.71, samples=20 00:27:50.393 iops : min= 128, max= 198, avg=173.80, stdev=18.93, samples=20 00:27:50.393 lat (msec) : 50=0.86%, 100=71.27%, 250=27.88% 00:27:50.393 cpu : usr=41.35%, sys=2.66%, ctx=1200, majf=0, minf=1075 00:27:50.393 IO depths : 1=0.1%, 2=3.0%, 4=12.0%, 8=70.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:27:50.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.393 complete : 0=0.0%, 4=90.7%, 8=6.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.393 issued rwts: total=1754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.393 filename0: (groupid=0, jobs=1): err= 0: pid=89624: Sat Sep 28 01:40:45 2024 00:27:50.393 read: IOPS=190, BW=762KiB/s (780kB/s)(7644KiB/10034msec) 00:27:50.393 slat (usec): min=5, max=8030, avg=24.61, stdev=205.03 00:27:50.393 clat (msec): min=17, max=163, avg=83.81, stdev=20.46 00:27:50.393 lat (msec): min=17, max=163, avg=83.84, stdev=20.46 00:27:50.393 clat percentiles (msec): 00:27:50.393 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 64], 00:27:50.393 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 90], 00:27:50.393 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 120], 00:27:50.393 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:27:50.393 | 99.99th=[ 163] 00:27:50.393 bw ( KiB/s): min= 640, max= 872, per=4.21%, avg=758.00, stdev=61.41, samples=20 00:27:50.393 iops : min= 160, max= 218, avg=189.50, stdev=15.35, samples=20 00:27:50.393 lat (msec) : 20=0.16%, 50=1.83%, 100=81.11%, 250=16.90% 00:27:50.393 cpu : usr=37.83%, sys=2.61%, ctx=1102, majf=0, minf=1072 00:27:50.393 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:27:50.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.393 complete : 0=0.0%, 4=88.1%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.393 issued rwts: total=1911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.393 filename0: (groupid=0, jobs=1): err= 0: pid=89625: Sat Sep 28 01:40:45 2024 00:27:50.393 read: IOPS=186, BW=746KiB/s 
(763kB/s)(7508KiB/10071msec) 00:27:50.393 slat (usec): min=4, max=8032, avg=28.66, stdev=320.35 00:27:50.393 clat (msec): min=7, max=179, avg=85.56, stdev=24.47 00:27:50.393 lat (msec): min=7, max=179, avg=85.59, stdev=24.48 00:27:50.393 clat percentiles (msec): 00:27:50.393 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 60], 20.00th=[ 70], 00:27:50.393 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 96], 00:27:50.393 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 121], 00:27:50.393 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 180], 00:27:50.393 | 99.99th=[ 180] 00:27:50.393 bw ( KiB/s): min= 600, max= 1386, per=4.14%, avg=744.10, stdev=162.00, samples=20 00:27:50.393 iops : min= 150, max= 346, avg=185.95, stdev=40.41, samples=20 00:27:50.393 lat (msec) : 10=0.85%, 20=1.60%, 50=4.37%, 100=73.68%, 250=19.50% 00:27:50.393 cpu : usr=32.65%, sys=2.23%, ctx=854, majf=0, minf=1073 00:27:50.393 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.0%, 16=16.6%, 32=0.0%, >=64=0.0% 00:27:50.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.393 complete : 0=0.0%, 4=88.9%, 8=10.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.393 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.393 filename0: (groupid=0, jobs=1): err= 0: pid=89626: Sat Sep 28 01:40:45 2024 00:27:50.393 read: IOPS=180, BW=723KiB/s (741kB/s)(7288KiB/10076msec) 00:27:50.393 slat (usec): min=5, max=4033, avg=23.29, stdev=150.83 00:27:50.393 clat (msec): min=15, max=167, avg=88.12, stdev=22.45 00:27:50.393 lat (msec): min=15, max=167, avg=88.14, stdev=22.45 00:27:50.393 clat percentiles (msec): 00:27:50.393 | 1.00th=[ 29], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 68], 00:27:50.393 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 95], 00:27:50.393 | 70.00th=[ 100], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 121], 00:27:50.393 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 167], 00:27:50.393 | 99.99th=[ 167] 00:27:50.393 bw ( KiB/s): min= 525, max= 908, per=4.01%, avg=722.05, stdev=93.93, samples=20 00:27:50.393 iops : min= 131, max= 227, avg=180.50, stdev=23.51, samples=20 00:27:50.393 lat (msec) : 20=0.88%, 50=1.92%, 100=68.61%, 250=28.59% 00:27:50.393 cpu : usr=39.75%, sys=2.74%, ctx=1419, majf=0, minf=1073 00:27:50.393 IO depths : 1=0.1%, 2=2.4%, 4=9.8%, 8=72.9%, 16=14.8%, 32=0.0%, >=64=0.0% 00:27:50.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.393 complete : 0=0.0%, 4=89.9%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.393 issued rwts: total=1822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.393 filename0: (groupid=0, jobs=1): err= 0: pid=89627: Sat Sep 28 01:40:45 2024 00:27:50.393 read: IOPS=187, BW=749KiB/s (767kB/s)(7536KiB/10064msec) 00:27:50.393 slat (usec): min=4, max=8037, avg=34.13, stdev=369.18 00:27:50.393 clat (msec): min=26, max=143, avg=85.14, stdev=19.79 00:27:50.393 lat (msec): min=26, max=144, avg=85.17, stdev=19.80 00:27:50.393 clat percentiles (msec): 00:27:50.393 | 1.00th=[ 37], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 64], 00:27:50.393 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 93], 00:27:50.394 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:27:50.394 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:27:50.394 | 99.99th=[ 144] 00:27:50.394 bw ( KiB/s): min= 664, max= 896, per=4.15%, 
avg=747.05, stdev=54.18, samples=20 00:27:50.394 iops : min= 166, max= 224, avg=186.75, stdev=13.55, samples=20 00:27:50.394 lat (msec) : 50=2.39%, 100=81.90%, 250=15.71% 00:27:50.394 cpu : usr=32.39%, sys=2.47%, ctx=981, majf=0, minf=1073 00:27:50.394 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.394 filename0: (groupid=0, jobs=1): err= 0: pid=89628: Sat Sep 28 01:40:45 2024 00:27:50.394 read: IOPS=185, BW=743KiB/s (761kB/s)(7484KiB/10069msec) 00:27:50.394 slat (usec): min=5, max=4030, avg=21.21, stdev=124.16 00:27:50.394 clat (msec): min=28, max=145, avg=85.82, stdev=19.70 00:27:50.394 lat (msec): min=28, max=145, avg=85.84, stdev=19.70 00:27:50.394 clat percentiles (msec): 00:27:50.394 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 65], 00:27:50.394 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 94], 00:27:50.394 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 118], 00:27:50.394 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:27:50.394 | 99.99th=[ 146] 00:27:50.394 bw ( KiB/s): min= 640, max= 880, per=4.12%, avg=741.80, stdev=54.73, samples=20 00:27:50.394 iops : min= 160, max= 220, avg=185.40, stdev=13.71, samples=20 00:27:50.394 lat (msec) : 50=1.44%, 100=80.38%, 250=18.17% 00:27:50.394 cpu : usr=38.82%, sys=2.26%, ctx=1138, majf=0, minf=1074 00:27:50.394 IO depths : 1=0.1%, 2=1.2%, 4=5.0%, 8=78.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 complete : 0=0.0%, 4=88.5%, 8=10.4%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 issued rwts: total=1871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.394 filename0: (groupid=0, jobs=1): err= 0: pid=89629: Sat Sep 28 01:40:45 2024 00:27:50.394 read: IOPS=187, BW=748KiB/s (766kB/s)(7508KiB/10031msec) 00:27:50.394 slat (usec): min=4, max=8033, avg=26.65, stdev=261.53 00:27:50.394 clat (msec): min=47, max=176, avg=85.35, stdev=20.82 00:27:50.394 lat (msec): min=47, max=176, avg=85.38, stdev=20.81 00:27:50.394 clat percentiles (msec): 00:27:50.394 | 1.00th=[ 50], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 63], 00:27:50.394 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 94], 00:27:50.394 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 121], 00:27:50.394 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 178], 99.95th=[ 178], 00:27:50.394 | 99.99th=[ 178] 00:27:50.394 bw ( KiB/s): min= 512, max= 848, per=4.14%, avg=744.05, stdev=75.68, samples=20 00:27:50.394 iops : min= 128, max= 212, avg=185.95, stdev=18.91, samples=20 00:27:50.394 lat (msec) : 50=1.28%, 100=81.03%, 250=17.69% 00:27:50.394 cpu : usr=31.39%, sys=1.85%, ctx=872, majf=0, minf=1074 00:27:50.394 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:27:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.394 filename0: (groupid=0, jobs=1): 
err= 0: pid=89630: Sat Sep 28 01:40:45 2024 00:27:50.394 read: IOPS=188, BW=756KiB/s (774kB/s)(7600KiB/10059msec) 00:27:50.394 slat (usec): min=4, max=8036, avg=30.17, stdev=318.37 00:27:50.394 clat (msec): min=47, max=144, avg=84.45, stdev=18.95 00:27:50.394 lat (msec): min=47, max=144, avg=84.48, stdev=18.96 00:27:50.394 clat percentiles (msec): 00:27:50.394 | 1.00th=[ 49], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 66], 00:27:50.394 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 93], 00:27:50.394 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 115], 00:27:50.394 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:27:50.394 | 99.99th=[ 146] 00:27:50.394 bw ( KiB/s): min= 664, max= 816, per=4.18%, avg=752.95, stdev=36.06, samples=20 00:27:50.394 iops : min= 166, max= 204, avg=188.20, stdev= 8.99, samples=20 00:27:50.394 lat (msec) : 50=2.47%, 100=81.11%, 250=16.42% 00:27:50.394 cpu : usr=33.11%, sys=2.13%, ctx=1021, majf=0, minf=1074 00:27:50.394 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.394 filename1: (groupid=0, jobs=1): err= 0: pid=89631: Sat Sep 28 01:40:45 2024 00:27:50.394 read: IOPS=195, BW=784KiB/s (802kB/s)(7852KiB/10021msec) 00:27:50.394 slat (nsec): min=4247, max=43092, avg=17388.27, stdev=5544.91 00:27:50.394 clat (msec): min=24, max=193, avg=81.59, stdev=22.86 00:27:50.394 lat (msec): min=24, max=193, avg=81.61, stdev=22.86 00:27:50.394 clat percentiles (msec): 00:27:50.394 | 1.00th=[ 36], 5.00th=[ 49], 10.00th=[ 58], 20.00th=[ 61], 00:27:50.394 | 30.00th=[ 66], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 87], 00:27:50.394 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:27:50.394 | 99.00th=[ 144], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 194], 00:27:50.394 | 99.99th=[ 194] 00:27:50.394 bw ( KiB/s): min= 601, max= 920, per=4.31%, avg=775.21, stdev=69.08, samples=19 00:27:50.394 iops : min= 150, max= 230, avg=193.79, stdev=17.31, samples=19 00:27:50.394 lat (msec) : 50=5.91%, 100=79.83%, 250=14.26% 00:27:50.394 cpu : usr=31.81%, sys=2.29%, ctx=923, majf=0, minf=1074 00:27:50.394 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=83.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 issued rwts: total=1963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.394 filename1: (groupid=0, jobs=1): err= 0: pid=89632: Sat Sep 28 01:40:45 2024 00:27:50.394 read: IOPS=196, BW=786KiB/s (805kB/s)(7880KiB/10028msec) 00:27:50.394 slat (usec): min=4, max=16031, avg=35.84, stdev=429.16 00:27:50.394 clat (msec): min=27, max=182, avg=81.12, stdev=21.64 00:27:50.394 lat (msec): min=27, max=182, avg=81.16, stdev=21.66 00:27:50.394 clat percentiles (msec): 00:27:50.394 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 63], 00:27:50.394 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 84], 60.00th=[ 89], 00:27:50.394 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 118], 00:27:50.394 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 184], 00:27:50.394 | 99.99th=[ 184] 
00:27:50.394 bw ( KiB/s): min= 576, max= 920, per=4.34%, avg=780.47, stdev=82.13, samples=19 00:27:50.394 iops : min= 144, max= 230, avg=195.11, stdev=20.52, samples=19 00:27:50.394 lat (msec) : 50=4.77%, 100=79.59%, 250=15.63% 00:27:50.394 cpu : usr=40.00%, sys=3.21%, ctx=1323, majf=0, minf=1074 00:27:50.394 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 issued rwts: total=1970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.394 filename1: (groupid=0, jobs=1): err= 0: pid=89633: Sat Sep 28 01:40:45 2024 00:27:50.394 read: IOPS=190, BW=764KiB/s (782kB/s)(7680KiB/10057msec) 00:27:50.394 slat (usec): min=5, max=8038, avg=26.18, stdev=258.79 00:27:50.394 clat (msec): min=24, max=147, avg=83.52, stdev=20.00 00:27:50.394 lat (msec): min=24, max=147, avg=83.54, stdev=20.00 00:27:50.394 clat percentiles (msec): 00:27:50.394 | 1.00th=[ 29], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 64], 00:27:50.394 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 91], 00:27:50.394 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 111], 00:27:50.394 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:27:50.394 | 99.99th=[ 148] 00:27:50.394 bw ( KiB/s): min= 664, max= 896, per=4.24%, avg=763.75, stdev=54.83, samples=20 00:27:50.394 iops : min= 166, max= 224, avg=190.90, stdev=13.73, samples=20 00:27:50.394 lat (msec) : 50=3.12%, 100=81.72%, 250=15.16% 00:27:50.394 cpu : usr=32.74%, sys=2.46%, ctx=996, majf=0, minf=1071 00:27:50.394 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=79.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.394 filename1: (groupid=0, jobs=1): err= 0: pid=89634: Sat Sep 28 01:40:45 2024 00:27:50.394 read: IOPS=185, BW=744KiB/s (762kB/s)(7468KiB/10038msec) 00:27:50.394 slat (usec): min=4, max=8033, avg=27.72, stdev=259.72 00:27:50.394 clat (msec): min=39, max=164, avg=85.79, stdev=20.03 00:27:50.394 lat (msec): min=39, max=164, avg=85.81, stdev=20.02 00:27:50.394 clat percentiles (msec): 00:27:50.394 | 1.00th=[ 52], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 65], 00:27:50.394 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 91], 00:27:50.394 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 120], 00:27:50.394 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:27:50.394 | 99.99th=[ 165] 00:27:50.394 bw ( KiB/s): min= 513, max= 824, per=4.12%, avg=742.15, stdev=78.45, samples=20 00:27:50.394 iops : min= 128, max= 206, avg=185.50, stdev=19.64, samples=20 00:27:50.394 lat (msec) : 50=0.80%, 100=76.49%, 250=22.71% 00:27:50.394 cpu : usr=40.22%, sys=2.33%, ctx=1469, majf=0, minf=1073 00:27:50.394 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=77.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:27:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.394 issued rwts: total=1867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.394 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:27:50.394 filename1: (groupid=0, jobs=1): err= 0: pid=89635: Sat Sep 28 01:40:45 2024 00:27:50.394 read: IOPS=187, BW=749KiB/s (767kB/s)(7504KiB/10017msec) 00:27:50.394 slat (usec): min=5, max=6058, avg=37.52, stdev=292.61 00:27:50.394 clat (msec): min=46, max=144, avg=85.20, stdev=19.65 00:27:50.395 lat (msec): min=46, max=144, avg=85.23, stdev=19.66 00:27:50.395 clat percentiles (msec): 00:27:50.395 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 65], 00:27:50.395 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 91], 00:27:50.395 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 111], 95.00th=[ 122], 00:27:50.395 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:27:50.395 | 99.99th=[ 146] 00:27:50.395 bw ( KiB/s): min= 640, max= 824, per=4.14%, avg=744.89, stdev=61.42, samples=19 00:27:50.395 iops : min= 160, max= 206, avg=186.21, stdev=15.35, samples=19 00:27:50.395 lat (msec) : 50=0.64%, 100=79.85%, 250=19.51% 00:27:50.395 cpu : usr=41.50%, sys=2.97%, ctx=1659, majf=0, minf=1073 00:27:50.395 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:27:50.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.395 filename1: (groupid=0, jobs=1): err= 0: pid=89636: Sat Sep 28 01:40:45 2024 00:27:50.395 read: IOPS=174, BW=699KiB/s (716kB/s)(7008KiB/10026msec) 00:27:50.395 slat (nsec): min=4302, max=44580, avg=16173.75, stdev=5400.95 00:27:50.395 clat (msec): min=26, max=183, avg=91.45, stdev=22.70 00:27:50.395 lat (msec): min=26, max=183, avg=91.47, stdev=22.70 00:27:50.395 clat percentiles (msec): 00:27:50.395 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 72], 00:27:50.395 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 96], 00:27:50.395 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 140], 00:27:50.395 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 184], 99.95th=[ 184], 00:27:50.395 | 99.99th=[ 184] 00:27:50.395 bw ( KiB/s): min= 512, max= 812, per=3.81%, avg=686.58, stdev=95.98, samples=19 00:27:50.395 iops : min= 128, max= 203, avg=171.63, stdev=24.00, samples=19 00:27:50.395 lat (msec) : 50=1.77%, 100=72.32%, 250=25.91% 00:27:50.395 cpu : usr=31.07%, sys=2.21%, ctx=861, majf=0, minf=1074 00:27:50.395 IO depths : 1=0.1%, 2=3.2%, 4=12.7%, 8=70.0%, 16=14.0%, 32=0.0%, >=64=0.0% 00:27:50.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 complete : 0=0.0%, 4=90.5%, 8=6.7%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.395 filename1: (groupid=0, jobs=1): err= 0: pid=89637: Sat Sep 28 01:40:45 2024 00:27:50.395 read: IOPS=189, BW=758KiB/s (777kB/s)(7640KiB/10074msec) 00:27:50.395 slat (usec): min=5, max=4034, avg=21.13, stdev=130.08 00:27:50.395 clat (msec): min=15, max=144, avg=84.09, stdev=21.62 00:27:50.395 lat (msec): min=15, max=144, avg=84.11, stdev=21.62 00:27:50.395 clat percentiles (msec): 00:27:50.395 | 1.00th=[ 27], 5.00th=[ 56], 10.00th=[ 60], 20.00th=[ 64], 00:27:50.395 | 30.00th=[ 71], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 93], 00:27:50.395 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 120], 00:27:50.395 | 99.00th=[ 142], 99.50th=[ 144], 
99.90th=[ 144], 99.95th=[ 144], 00:27:50.395 | 99.99th=[ 144] 00:27:50.395 bw ( KiB/s): min= 640, max= 1003, per=4.21%, avg=757.20, stdev=82.65, samples=20 00:27:50.395 iops : min= 160, max= 250, avg=189.25, stdev=20.55, samples=20 00:27:50.395 lat (msec) : 20=0.84%, 50=2.51%, 100=79.58%, 250=17.07% 00:27:50.395 cpu : usr=37.76%, sys=2.70%, ctx=1145, majf=0, minf=1075 00:27:50.395 IO depths : 1=0.1%, 2=1.2%, 4=4.3%, 8=79.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:50.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.395 filename1: (groupid=0, jobs=1): err= 0: pid=89638: Sat Sep 28 01:40:45 2024 00:27:50.395 read: IOPS=193, BW=776KiB/s (794kB/s)(7812KiB/10072msec) 00:27:50.395 slat (usec): min=4, max=12030, avg=36.56, stdev=424.92 00:27:50.395 clat (usec): min=1969, max=167984, avg=82152.40, stdev=28649.33 00:27:50.395 lat (usec): min=1978, max=167996, avg=82188.96, stdev=28652.92 00:27:50.395 clat percentiles (msec): 00:27:50.395 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 56], 20.00th=[ 64], 00:27:50.395 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 95], 00:27:50.395 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 111], 95.00th=[ 121], 00:27:50.395 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 169], 00:27:50.395 | 99.99th=[ 169] 00:27:50.395 bw ( KiB/s): min= 617, max= 2019, per=4.31%, avg=776.75, stdev=296.33, samples=20 00:27:50.395 iops : min= 154, max= 504, avg=194.10, stdev=73.93, samples=20 00:27:50.395 lat (msec) : 2=0.20%, 4=3.07%, 10=2.46%, 20=0.82%, 50=2.61% 00:27:50.395 lat (msec) : 100=72.56%, 250=18.28% 00:27:50.395 cpu : usr=34.82%, sys=2.24%, ctx=972, majf=0, minf=1075 00:27:50.395 IO depths : 1=0.3%, 2=2.1%, 4=7.5%, 8=74.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:50.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 complete : 0=0.0%, 4=89.6%, 8=8.8%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 issued rwts: total=1953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.395 filename2: (groupid=0, jobs=1): err= 0: pid=89639: Sat Sep 28 01:40:45 2024 00:27:50.395 read: IOPS=191, BW=767KiB/s (785kB/s)(7692KiB/10034msec) 00:27:50.395 slat (usec): min=4, max=8100, avg=26.19, stdev=244.08 00:27:50.395 clat (msec): min=33, max=163, avg=83.29, stdev=20.17 00:27:50.395 lat (msec): min=33, max=163, avg=83.31, stdev=20.17 00:27:50.395 clat percentiles (msec): 00:27:50.395 | 1.00th=[ 43], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 64], 00:27:50.395 | 30.00th=[ 70], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 90], 00:27:50.395 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 118], 00:27:50.395 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:27:50.395 | 99.99th=[ 163] 00:27:50.395 bw ( KiB/s): min= 640, max= 888, per=4.24%, avg=762.80, stdev=75.73, samples=20 00:27:50.395 iops : min= 160, max= 222, avg=190.70, stdev=18.93, samples=20 00:27:50.395 lat (msec) : 50=2.96%, 100=81.70%, 250=15.34% 00:27:50.395 cpu : usr=37.58%, sys=2.37%, ctx=1177, majf=0, minf=1072 00:27:50.395 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=80.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:50.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 issued rwts: total=1923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.395 filename2: (groupid=0, jobs=1): err= 0: pid=89640: Sat Sep 28 01:40:45 2024 00:27:50.395 read: IOPS=195, BW=783KiB/s (801kB/s)(7844KiB/10022msec) 00:27:50.395 slat (usec): min=5, max=8029, avg=23.90, stdev=202.47 00:27:50.395 clat (msec): min=24, max=192, avg=81.65, stdev=22.32 00:27:50.395 lat (msec): min=24, max=192, avg=81.68, stdev=22.32 00:27:50.395 clat percentiles (msec): 00:27:50.395 | 1.00th=[ 36], 5.00th=[ 53], 10.00th=[ 58], 20.00th=[ 62], 00:27:50.395 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 88], 00:27:50.395 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 117], 00:27:50.395 | 99.00th=[ 144], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 192], 00:27:50.395 | 99.99th=[ 192] 00:27:50.395 bw ( KiB/s): min= 592, max= 896, per=4.30%, avg=774.74, stdev=69.45, samples=19 00:27:50.395 iops : min= 148, max= 224, avg=193.68, stdev=17.36, samples=19 00:27:50.395 lat (msec) : 50=4.03%, 100=80.57%, 250=15.40% 00:27:50.395 cpu : usr=41.96%, sys=2.90%, ctx=1086, majf=0, minf=1071 00:27:50.395 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:50.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 issued rwts: total=1961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.395 filename2: (groupid=0, jobs=1): err= 0: pid=89641: Sat Sep 28 01:40:45 2024 00:27:50.395 read: IOPS=190, BW=762KiB/s (780kB/s)(7664KiB/10057msec) 00:27:50.395 slat (usec): min=4, max=4032, avg=28.43, stdev=199.16 00:27:50.395 clat (msec): min=30, max=144, avg=83.77, stdev=19.82 00:27:50.395 lat (msec): min=30, max=144, avg=83.80, stdev=19.82 00:27:50.395 clat percentiles (msec): 00:27:50.395 | 1.00th=[ 34], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 64], 00:27:50.395 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 90], 00:27:50.395 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 116], 00:27:50.395 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:27:50.395 | 99.99th=[ 146] 00:27:50.395 bw ( KiB/s): min= 640, max= 897, per=4.23%, avg=761.10, stdev=60.95, samples=20 00:27:50.395 iops : min= 160, max= 224, avg=190.25, stdev=15.22, samples=20 00:27:50.395 lat (msec) : 50=2.09%, 100=80.85%, 250=17.07% 00:27:50.395 cpu : usr=40.42%, sys=2.72%, ctx=1493, majf=0, minf=1073 00:27:50.395 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:27:50.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.395 filename2: (groupid=0, jobs=1): err= 0: pid=89642: Sat Sep 28 01:40:45 2024 00:27:50.395 read: IOPS=196, BW=788KiB/s (807kB/s)(7900KiB/10027msec) 00:27:50.395 slat (usec): min=4, max=8037, avg=29.41, stdev=312.37 00:27:50.395 clat (msec): min=18, max=196, avg=81.09, stdev=23.20 00:27:50.395 lat (msec): min=18, max=196, avg=81.12, stdev=23.21 00:27:50.395 clat percentiles (msec): 00:27:50.395 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 61], 00:27:50.395 | 30.00th=[ 69], 40.00th=[ 72], 
50.00th=[ 84], 60.00th=[ 86], 00:27:50.395 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:50.395 | 99.00th=[ 144], 99.50th=[ 182], 99.90th=[ 197], 99.95th=[ 197], 00:27:50.395 | 99.99th=[ 197] 00:27:50.395 bw ( KiB/s): min= 576, max= 920, per=4.32%, avg=778.74, stdev=81.82, samples=19 00:27:50.395 iops : min= 144, max= 230, avg=194.68, stdev=20.46, samples=19 00:27:50.395 lat (msec) : 20=0.20%, 50=5.82%, 100=79.75%, 250=14.23% 00:27:50.395 cpu : usr=34.05%, sys=2.22%, ctx=873, majf=0, minf=1074 00:27:50.395 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:50.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.395 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.396 filename2: (groupid=0, jobs=1): err= 0: pid=89643: Sat Sep 28 01:40:45 2024 00:27:50.396 read: IOPS=187, BW=751KiB/s (769kB/s)(7580KiB/10097msec) 00:27:50.396 slat (usec): min=4, max=8038, avg=23.76, stdev=206.14 00:27:50.396 clat (msec): min=7, max=169, avg=84.83, stdev=24.53 00:27:50.396 lat (msec): min=7, max=169, avg=84.85, stdev=24.54 00:27:50.396 clat percentiles (msec): 00:27:50.396 | 1.00th=[ 11], 5.00th=[ 35], 10.00th=[ 60], 20.00th=[ 64], 00:27:50.396 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 95], 00:27:50.396 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 122], 00:27:50.396 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 169], 00:27:50.396 | 99.99th=[ 169] 00:27:50.396 bw ( KiB/s): min= 640, max= 1380, per=4.19%, avg=753.40, stdev=154.28, samples=20 00:27:50.396 iops : min= 160, max= 345, avg=188.35, stdev=38.57, samples=20 00:27:50.396 lat (msec) : 10=0.84%, 20=2.43%, 50=2.11%, 100=75.30%, 250=19.31% 00:27:50.396 cpu : usr=36.50%, sys=2.55%, ctx=1212, majf=0, minf=1073 00:27:50.396 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:50.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 issued rwts: total=1895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.396 filename2: (groupid=0, jobs=1): err= 0: pid=89644: Sat Sep 28 01:40:45 2024 00:27:50.396 read: IOPS=188, BW=752KiB/s (771kB/s)(7596KiB/10095msec) 00:27:50.396 slat (usec): min=5, max=8033, avg=36.71, stdev=331.46 00:27:50.396 clat (msec): min=7, max=144, avg=84.74, stdev=24.12 00:27:50.396 lat (msec): min=7, max=144, avg=84.78, stdev=24.12 00:27:50.396 clat percentiles (msec): 00:27:50.396 | 1.00th=[ 12], 5.00th=[ 35], 10.00th=[ 61], 20.00th=[ 69], 00:27:50.396 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 95], 00:27:50.396 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 121], 00:27:50.396 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:27:50.396 | 99.99th=[ 146] 00:27:50.396 bw ( KiB/s): min= 640, max= 1386, per=4.18%, avg=752.90, stdev=154.94, samples=20 00:27:50.396 iops : min= 160, max= 346, avg=188.20, stdev=38.63, samples=20 00:27:50.396 lat (msec) : 10=0.74%, 20=2.42%, 50=1.95%, 100=76.36%, 250=18.54% 00:27:50.396 cpu : usr=35.42%, sys=2.84%, ctx=1084, majf=0, minf=1074 00:27:50.396 IO depths : 1=0.2%, 2=1.7%, 4=6.3%, 8=76.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:50.396 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 complete : 0=0.0%, 4=89.2%, 8=9.4%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 issued rwts: total=1899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.396 filename2: (groupid=0, jobs=1): err= 0: pid=89645: Sat Sep 28 01:40:45 2024 00:27:50.396 read: IOPS=194, BW=778KiB/s (797kB/s)(7824KiB/10053msec) 00:27:50.396 slat (usec): min=4, max=12038, avg=28.91, stdev=331.53 00:27:50.396 clat (msec): min=35, max=143, avg=81.90, stdev=19.70 00:27:50.396 lat (msec): min=35, max=143, avg=81.93, stdev=19.70 00:27:50.396 clat percentiles (msec): 00:27:50.396 | 1.00th=[ 45], 5.00th=[ 55], 10.00th=[ 59], 20.00th=[ 63], 00:27:50.396 | 30.00th=[ 68], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 89], 00:27:50.396 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 113], 00:27:50.396 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:27:50.396 | 99.99th=[ 144] 00:27:50.396 bw ( KiB/s): min= 688, max= 896, per=4.32%, avg=778.20, stdev=58.04, samples=20 00:27:50.396 iops : min= 172, max= 224, avg=194.50, stdev=14.49, samples=20 00:27:50.396 lat (msec) : 50=3.89%, 100=82.36%, 250=13.75% 00:27:50.396 cpu : usr=37.13%, sys=2.76%, ctx=1295, majf=0, minf=1073 00:27:50.396 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:50.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.396 filename2: (groupid=0, jobs=1): err= 0: pid=89646: Sat Sep 28 01:40:45 2024 00:27:50.396 read: IOPS=177, BW=710KiB/s (727kB/s)(7136KiB/10052msec) 00:27:50.396 slat (usec): min=4, max=8036, avg=26.12, stdev=268.38 00:27:50.396 clat (msec): min=49, max=195, avg=89.87, stdev=21.85 00:27:50.396 lat (msec): min=49, max=195, avg=89.90, stdev=21.86 00:27:50.396 clat percentiles (msec): 00:27:50.396 | 1.00th=[ 51], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 71], 00:27:50.396 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 96], 00:27:50.396 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 120], 95.00th=[ 129], 00:27:50.396 | 99.00th=[ 146], 99.50th=[ 180], 99.90th=[ 197], 99.95th=[ 197], 00:27:50.396 | 99.99th=[ 197] 00:27:50.396 bw ( KiB/s): min= 384, max= 824, per=3.93%, avg=707.20, stdev=109.12, samples=20 00:27:50.396 iops : min= 96, max= 206, avg=176.80, stdev=27.28, samples=20 00:27:50.396 lat (msec) : 50=0.56%, 100=72.53%, 250=26.91% 00:27:50.396 cpu : usr=32.33%, sys=2.37%, ctx=912, majf=0, minf=1072 00:27:50.396 IO depths : 1=0.1%, 2=2.4%, 4=9.6%, 8=73.3%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:50.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 complete : 0=0.0%, 4=89.7%, 8=8.2%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.396 issued rwts: total=1784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.396 00:27:50.396 Run status group 0 (all jobs): 00:27:50.396 READ: bw=17.6MiB/s (18.4MB/s), 696KiB/s-788KiB/s (712kB/s-807kB/s), io=177MiB (186MB), run=10017-10097msec 00:27:50.396 ----------------------------------------------------- 00:27:50.396 Suppressions used: 00:27:50.396 count bytes template 00:27:50.396 45 402 /usr/src/fio/parse.c 00:27:50.396 1 8 libtcmalloc_minimal.so 00:27:50.396 1 904 
libcrypto.so 00:27:50.396 ----------------------------------------------------- 00:27:50.396 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.396 bdev_null0 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.396 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.397 [2024-09-28 01:40:46.208381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:50.397 01:40:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.397 bdev_null1 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:50.397 { 00:27:50.397 "params": { 00:27:50.397 "name": "Nvme$subsystem", 00:27:50.397 "trtype": "$TEST_TRANSPORT", 00:27:50.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.397 "adrfam": "ipv4", 00:27:50.397 "trsvcid": "$NVMF_PORT", 00:27:50.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.397 "hdgst": ${hdgst:-false}, 00:27:50.397 "ddgst": ${ddgst:-false} 00:27:50.397 }, 00:27:50.397 "method": "bdev_nvme_attach_controller" 00:27:50.397 } 00:27:50.397 EOF 00:27:50.397 )") 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
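The trace above launches fio through SPDK's fio bdev plugin: the plugin shared object built at build/fio/spdk_bdev is preloaded into a stock fio binary, the bdev layer is configured from JSON handed in on /dev/fd/62, and the job description arrives on /dev/fd/61. A minimal sketch of an equivalent stand-alone invocation, assuming a build without sanitizers and on-disk files named bdev.json and dif.fio (both names are illustrative, not paths used by the harness):

# preload the SPDK bdev engine into fio and point it at a JSON bdev config
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

On sanitizer builds, as in this run, libasan has to come first in LD_PRELOAD, which is exactly what the LD_PRELOAD string assembled a few lines further down provides.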
00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:50.397 { 00:27:50.397 "params": { 00:27:50.397 "name": "Nvme$subsystem", 00:27:50.397 "trtype": "$TEST_TRANSPORT", 00:27:50.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.397 "adrfam": "ipv4", 00:27:50.397 "trsvcid": "$NVMF_PORT", 00:27:50.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.397 "hdgst": ${hdgst:-false}, 00:27:50.397 "ddgst": ${ddgst:-false} 00:27:50.397 }, 00:27:50.397 "method": "bdev_nvme_attach_controller" 00:27:50.397 } 00:27:50.397 EOF 00:27:50.397 )") 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
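gen_fio_conf, traced above through the file counter and the per-file cat at dif.sh@73, assembles the job description fio reads from /dev/fd/61. The template itself is not visible in this excerpt; the heredoc below is only a rough sketch of a two-file random-read job consistent with the parameters set at dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), with the section and bdev names (Nvme0n1, Nvme1n1) assumed for illustration:

# hypothetical job file; the real one is generated on the fly by gen_fio_conf
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=5
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

With two sections and numjobs=2, this is what produces the "Starting 4 threads" line fio prints below.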
00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:50.397 "params": { 00:27:50.397 "name": "Nvme0", 00:27:50.397 "trtype": "tcp", 00:27:50.397 "traddr": "10.0.0.3", 00:27:50.397 "adrfam": "ipv4", 00:27:50.397 "trsvcid": "4420", 00:27:50.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:50.397 "hdgst": false, 00:27:50.397 "ddgst": false 00:27:50.397 }, 00:27:50.397 "method": "bdev_nvme_attach_controller" 00:27:50.397 },{ 00:27:50.397 "params": { 00:27:50.397 "name": "Nvme1", 00:27:50.397 "trtype": "tcp", 00:27:50.397 "traddr": "10.0.0.3", 00:27:50.397 "adrfam": "ipv4", 00:27:50.397 "trsvcid": "4420", 00:27:50.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.397 "hdgst": false, 00:27:50.397 "ddgst": false 00:27:50.397 }, 00:27:50.397 "method": "bdev_nvme_attach_controller" 00:27:50.397 }' 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:50.397 01:40:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.657 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:50.657 ... 00:27:50.657 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:50.657 ... 
00:27:50.657 fio-3.35 00:27:50.657 Starting 4 threads 00:27:57.225 00:27:57.225 filename0: (groupid=0, jobs=1): err= 0: pid=89782: Sat Sep 28 01:40:52 2024 00:27:57.225 read: IOPS=1857, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5001msec) 00:27:57.225 slat (usec): min=5, max=645, avg=16.89, stdev=10.45 00:27:57.225 clat (usec): min=1080, max=8685, avg=4260.72, stdev=1264.45 00:27:57.225 lat (usec): min=1089, max=8700, avg=4277.61, stdev=1263.97 00:27:57.225 clat percentiles (usec): 00:27:57.225 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2835], 00:27:57.225 | 30.00th=[ 3326], 40.00th=[ 3752], 50.00th=[ 4686], 60.00th=[ 4883], 00:27:57.225 | 70.00th=[ 5080], 80.00th=[ 5342], 90.00th=[ 5604], 95.00th=[ 6063], 00:27:57.225 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 7767], 99.95th=[ 7832], 00:27:57.225 | 99.99th=[ 8717] 00:27:57.225 bw ( KiB/s): min=10224, max=16816, per=25.88%, avg=14663.11, stdev=2006.14, samples=9 00:27:57.225 iops : min= 1278, max= 2102, avg=1832.89, stdev=250.77, samples=9 00:27:57.225 lat (msec) : 2=0.57%, 4=40.65%, 10=58.78% 00:27:57.225 cpu : usr=90.30%, sys=8.30%, ctx=77, majf=0, minf=1075 00:27:57.225 IO depths : 1=0.1%, 2=5.7%, 4=60.6%, 8=33.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.225 complete : 0=0.0%, 4=97.8%, 8=2.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.225 issued rwts: total=9287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.225 filename0: (groupid=0, jobs=1): err= 0: pid=89783: Sat Sep 28 01:40:52 2024 00:27:57.225 read: IOPS=1684, BW=13.2MiB/s (13.8MB/s)(65.8MiB/5002msec) 00:27:57.225 slat (nsec): min=5372, max=64255, avg=17725.55, stdev=5930.11 00:27:57.225 clat (usec): min=1267, max=6799, avg=4689.41, stdev=997.98 00:27:57.225 lat (usec): min=1282, max=6835, avg=4707.13, stdev=997.24 00:27:57.225 clat percentiles (usec): 00:27:57.225 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 3064], 20.00th=[ 3687], 00:27:57.225 | 30.00th=[ 4490], 40.00th=[ 4817], 50.00th=[ 5080], 60.00th=[ 5211], 00:27:57.225 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5604], 95.00th=[ 5800], 00:27:57.225 | 99.00th=[ 6259], 99.50th=[ 6325], 99.90th=[ 6718], 99.95th=[ 6783], 00:27:57.225 | 99.99th=[ 6783] 00:27:57.225 bw ( KiB/s): min=12032, max=15744, per=24.09%, avg=13647.11, stdev=1518.50, samples=9 00:27:57.225 iops : min= 1504, max= 1968, avg=1705.89, stdev=189.81, samples=9 00:27:57.225 lat (msec) : 2=0.74%, 4=22.83%, 10=76.44% 00:27:57.225 cpu : usr=90.94%, sys=8.12%, ctx=7, majf=0, minf=1073 00:27:57.225 IO depths : 1=0.1%, 2=14.1%, 4=56.6%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.225 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.225 issued rwts: total=8428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.225 filename1: (groupid=0, jobs=1): err= 0: pid=89784: Sat Sep 28 01:40:52 2024 00:27:57.225 read: IOPS=1856, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5004msec) 00:27:57.225 slat (nsec): min=3627, max=72725, avg=14599.10, stdev=5822.22 00:27:57.225 clat (usec): min=718, max=10188, avg=4266.87, stdev=1308.54 00:27:57.225 lat (usec): min=727, max=10207, avg=4281.47, stdev=1308.50 00:27:57.225 clat percentiles (usec): 00:27:57.225 | 1.00th=[ 1549], 5.00th=[ 2057], 10.00th=[ 2540], 20.00th=[ 2835], 00:27:57.225 | 30.00th=[ 3326], 
40.00th=[ 4178], 50.00th=[ 4621], 60.00th=[ 4948], 00:27:57.225 | 70.00th=[ 5211], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5932], 00:27:57.225 | 99.00th=[ 6390], 99.50th=[ 6587], 99.90th=[ 7832], 99.95th=[ 9896], 00:27:57.225 | 99.99th=[10159] 00:27:57.225 bw ( KiB/s): min=10880, max=17472, per=25.90%, avg=14670.22, stdev=2648.51, samples=9 00:27:57.225 iops : min= 1360, max= 2184, avg=1833.78, stdev=331.06, samples=9 00:27:57.225 lat (usec) : 750=0.02% 00:27:57.225 lat (msec) : 2=4.95%, 4=34.07%, 10=60.95%, 20=0.01% 00:27:57.225 cpu : usr=91.55%, sys=7.52%, ctx=9, majf=0, minf=1075 00:27:57.225 IO depths : 1=0.1%, 2=6.6%, 4=60.6%, 8=32.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.225 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.225 issued rwts: total=9291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.225 filename1: (groupid=0, jobs=1): err= 0: pid=89785: Sat Sep 28 01:40:52 2024 00:27:57.225 read: IOPS=1684, BW=13.2MiB/s (13.8MB/s)(65.8MiB/5003msec) 00:27:57.225 slat (nsec): min=3675, max=72228, avg=17316.15, stdev=5579.21 00:27:57.225 clat (usec): min=1251, max=7244, avg=4692.43, stdev=999.16 00:27:57.225 lat (usec): min=1265, max=7263, avg=4709.75, stdev=998.51 00:27:57.225 clat percentiles (usec): 00:27:57.225 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 3064], 20.00th=[ 3687], 00:27:57.225 | 30.00th=[ 4490], 40.00th=[ 4817], 50.00th=[ 5080], 60.00th=[ 5211], 00:27:57.225 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5604], 95.00th=[ 5800], 00:27:57.225 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 6783], 99.95th=[ 6980], 00:27:57.225 | 99.99th=[ 7242] 00:27:57.225 bw ( KiB/s): min=12032, max=15744, per=24.09%, avg=13644.44, stdev=1520.90, samples=9 00:27:57.225 iops : min= 1504, max= 1968, avg=1705.56, stdev=190.11, samples=9 00:27:57.225 lat (msec) : 2=0.74%, 4=22.84%, 10=76.42% 00:27:57.225 cpu : usr=92.14%, sys=6.96%, ctx=8, majf=0, minf=1075 00:27:57.225 IO depths : 1=0.1%, 2=14.1%, 4=56.6%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.225 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.225 issued rwts: total=8428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.225 00:27:57.225 Run status group 0 (all jobs): 00:27:57.225 READ: bw=55.3MiB/s (58.0MB/s), 13.2MiB/s-14.5MiB/s (13.8MB/s-15.2MB/s), io=277MiB (290MB), run=5001-5004msec 00:27:57.484 ----------------------------------------------------- 00:27:57.484 Suppressions used: 00:27:57.484 count bytes template 00:27:57.484 6 52 /usr/src/fio/parse.c 00:27:57.485 1 8 libtcmalloc_minimal.so 00:27:57.485 1 904 libcrypto.so 00:27:57.485 ----------------------------------------------------- 00:27:57.485 00:27:57.485 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:57.485 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:57.485 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.485 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:57.485 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:57.485 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:27:57.485 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.485 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.744 ************************************ 00:27:57.744 END TEST fio_dif_rand_params 00:27:57.744 ************************************ 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.744 00:27:57.744 real 0m27.042s 00:27:57.744 user 2m5.975s 00:27:57.744 sys 0m9.836s 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:57.744 01:40:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.744 01:40:53 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:57.744 01:40:53 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:57.744 01:40:53 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:57.744 01:40:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:57.744 ************************************ 00:27:57.744 START TEST fio_dif_digest 00:27:57.744 ************************************ 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest 
-- target/dif.sh@127 -- # runtime=10 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.744 bdev_null0 00:27:57.744 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.745 [2024-09-28 01:40:53.524598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:57.745 
01:40:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:57.745 { 00:27:57.745 "params": { 00:27:57.745 "name": "Nvme$subsystem", 00:27:57.745 "trtype": "$TEST_TRANSPORT", 00:27:57.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.745 "adrfam": "ipv4", 00:27:57.745 "trsvcid": "$NVMF_PORT", 00:27:57.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.745 "hdgst": ${hdgst:-false}, 00:27:57.745 "ddgst": ${ddgst:-false} 00:27:57.745 }, 00:27:57.745 "method": "bdev_nvme_attach_controller" 00:27:57.745 } 00:27:57.745 EOF 00:27:57.745 )") 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
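The digest case provisions its target with the same RPC sequence traced a few lines up: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exported through an NVMe-oF subsystem listening on 10.0.0.3:4420 over TCP; the initiator side then attaches with header and data digests enabled ("hdgst": true, "ddgst": true in the JSON printed just below). Since rpc_cmd is effectively scripts/rpc.py pointed at the running target, the same provisioning can be reproduced outside the harness as:

# null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# export it over NVMe/TCP
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420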
00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:57.745 "params": { 00:27:57.745 "name": "Nvme0", 00:27:57.745 "trtype": "tcp", 00:27:57.745 "traddr": "10.0.0.3", 00:27:57.745 "adrfam": "ipv4", 00:27:57.745 "trsvcid": "4420", 00:27:57.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.745 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:57.745 "hdgst": true, 00:27:57.745 "ddgst": true 00:27:57.745 }, 00:27:57.745 "method": "bdev_nvme_attach_controller" 00:27:57.745 }' 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:57.745 01:40:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.004 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:58.004 ... 00:27:58.004 fio-3.35 00:27:58.004 Starting 3 threads 00:28:10.213 00:28:10.213 filename0: (groupid=0, jobs=1): err= 0: pid=89895: Sat Sep 28 01:41:04 2024 00:28:10.213 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(256MiB/10007msec) 00:28:10.213 slat (nsec): min=5385, max=62339, avg=18378.67, stdev=6216.97 00:28:10.213 clat (usec): min=14083, max=18746, avg=14629.80, stdev=507.71 00:28:10.213 lat (usec): min=14091, max=18789, avg=14648.18, stdev=508.02 00:28:10.213 clat percentiles (usec): 00:28:10.213 | 1.00th=[14222], 5.00th=[14222], 10.00th=[14353], 20.00th=[14353], 00:28:10.213 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14484], 60.00th=[14484], 00:28:10.213 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15008], 95.00th=[15795], 00:28:10.213 | 99.00th=[16909], 99.50th=[16909], 99.90th=[18744], 99.95th=[18744], 00:28:10.213 | 99.99th=[18744] 00:28:10.213 bw ( KiB/s): min=25344, max=26933, per=33.31%, avg=26153.05, stdev=309.50, samples=20 00:28:10.213 iops : min= 198, max= 210, avg=204.30, stdev= 2.36, samples=20 00:28:10.213 lat (msec) : 20=100.00% 00:28:10.213 cpu : usr=91.66%, sys=7.66%, ctx=14, majf=0, minf=1075 00:28:10.213 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.213 issued rwts: total=2046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.214 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.214 filename0: (groupid=0, jobs=1): err= 0: pid=89896: Sat Sep 28 01:41:04 2024 00:28:10.214 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(256MiB/10008msec) 00:28:10.214 slat (nsec): min=5346, max=68514, avg=18127.23, stdev=6472.58 00:28:10.214 clat (usec): min=14024, max=20245, avg=14633.21, stdev=530.32 00:28:10.214 lat (usec): min=14032, max=20267, avg=14651.34, stdev=530.58 00:28:10.214 clat 
percentiles (usec): 00:28:10.214 | 1.00th=[14091], 5.00th=[14222], 10.00th=[14353], 20.00th=[14353], 00:28:10.214 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14484], 60.00th=[14484], 00:28:10.214 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15008], 95.00th=[15795], 00:28:10.214 | 99.00th=[16909], 99.50th=[16909], 99.90th=[20317], 99.95th=[20317], 00:28:10.214 | 99.99th=[20317] 00:28:10.214 bw ( KiB/s): min=25344, max=26880, per=33.31%, avg=26147.70, stdev=295.93, samples=20 00:28:10.214 iops : min= 198, max= 210, avg=204.25, stdev= 2.24, samples=20 00:28:10.214 lat (msec) : 20=99.85%, 50=0.15% 00:28:10.214 cpu : usr=91.72%, sys=7.59%, ctx=11, majf=0, minf=1076 00:28:10.214 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.214 issued rwts: total=2046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.214 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.214 filename0: (groupid=0, jobs=1): err= 0: pid=89897: Sat Sep 28 01:41:04 2024 00:28:10.214 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(256MiB/10005msec) 00:28:10.214 slat (nsec): min=5717, max=68176, avg=18676.68, stdev=6556.75 00:28:10.214 clat (usec): min=14084, max=17423, avg=14626.94, stdev=492.59 00:28:10.214 lat (usec): min=14099, max=17444, avg=14645.62, stdev=492.85 00:28:10.214 clat percentiles (usec): 00:28:10.214 | 1.00th=[14222], 5.00th=[14222], 10.00th=[14353], 20.00th=[14353], 00:28:10.214 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14484], 60.00th=[14484], 00:28:10.214 | 70.00th=[14615], 80.00th=[14615], 90.00th=[15008], 95.00th=[15795], 00:28:10.214 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:28:10.214 | 99.99th=[17433] 00:28:10.214 bw ( KiB/s): min=25344, max=26933, per=33.32%, avg=26155.65, stdev=309.35, samples=20 00:28:10.214 iops : min= 198, max= 210, avg=204.30, stdev= 2.36, samples=20 00:28:10.214 lat (msec) : 20=100.00% 00:28:10.214 cpu : usr=91.83%, sys=7.49%, ctx=90, majf=0, minf=1073 00:28:10.214 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.214 issued rwts: total=2046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.214 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.214 00:28:10.214 Run status group 0 (all jobs): 00:28:10.214 READ: bw=76.7MiB/s (80.4MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.8MB/s), io=767MiB (805MB), run=10005-10008msec 00:28:10.214 ----------------------------------------------------- 00:28:10.214 Suppressions used: 00:28:10.214 count bytes template 00:28:10.214 5 44 /usr/src/fio/parse.c 00:28:10.214 1 8 libtcmalloc_minimal.so 00:28:10.214 1 904 libcrypto.so 00:28:10.214 ----------------------------------------------------- 00:28:10.214 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.214 00:28:10.214 real 0m12.106s 00:28:10.214 user 0m29.262s 00:28:10.214 sys 0m2.597s 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:10.214 01:41:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.214 ************************************ 00:28:10.214 END TEST fio_dif_digest 00:28:10.214 ************************************ 00:28:10.214 01:41:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:10.214 01:41:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:10.214 rmmod nvme_tcp 00:28:10.214 rmmod nvme_fabrics 00:28:10.214 rmmod nvme_keyring 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 89135 ']' 00:28:10.214 01:41:05 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 89135 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 89135 ']' 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 89135 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89135 00:28:10.214 killing process with pid 89135 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89135' 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@969 -- # kill 89135 00:28:10.214 01:41:05 nvmf_dif -- common/autotest_common.sh@974 -- # wait 89135 00:28:10.782 01:41:06 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:28:10.782 01:41:06 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:11.384 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:11.384 Waiting for block devices as requested 00:28:11.384 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:28:11.384 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:11.384 01:41:07 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.644 01:41:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:11.644 01:41:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.644 01:41:07 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:28:11.644 00:28:11.644 real 1m7.996s 00:28:11.644 user 4m2.622s 00:28:11.644 sys 0m20.849s 00:28:11.644 01:41:07 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.644 ************************************ 00:28:11.644 END TEST nvmf_dif 00:28:11.644 ************************************ 00:28:11.644 01:41:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:11.644 01:41:07 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:11.644 01:41:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:11.644 01:41:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:11.644 01:41:07 -- common/autotest_common.sh@10 -- # set +x 00:28:11.644 ************************************ 00:28:11.644 START TEST nvmf_abort_qd_sizes 00:28:11.644 ************************************ 00:28:11.644 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:11.904 * Looking for test storage... 
00:28:11.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:11.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.904 --rc genhtml_branch_coverage=1 00:28:11.904 --rc genhtml_function_coverage=1 00:28:11.904 --rc genhtml_legend=1 00:28:11.904 --rc geninfo_all_blocks=1 00:28:11.904 --rc geninfo_unexecuted_blocks=1 00:28:11.904 00:28:11.904 ' 00:28:11.904 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:11.904 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.904 --rc genhtml_branch_coverage=1 00:28:11.905 --rc genhtml_function_coverage=1 00:28:11.905 --rc genhtml_legend=1 00:28:11.905 --rc geninfo_all_blocks=1 00:28:11.905 --rc geninfo_unexecuted_blocks=1 00:28:11.905 00:28:11.905 ' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.905 --rc genhtml_branch_coverage=1 00:28:11.905 --rc genhtml_function_coverage=1 00:28:11.905 --rc genhtml_legend=1 00:28:11.905 --rc geninfo_all_blocks=1 00:28:11.905 --rc geninfo_unexecuted_blocks=1 00:28:11.905 00:28:11.905 ' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.905 --rc genhtml_branch_coverage=1 00:28:11.905 --rc genhtml_function_coverage=1 00:28:11.905 --rc genhtml_legend=1 00:28:11.905 --rc geninfo_all_blocks=1 00:28:11.905 --rc geninfo_unexecuted_blocks=1 00:28:11.905 00:28:11.905 ' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:11.905 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:11.905 Cannot find device "nvmf_init_br" 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:11.905 Cannot find device "nvmf_init_br2" 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:11.905 Cannot find device "nvmf_tgt_br" 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:28:11.905 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:12.165 Cannot find device "nvmf_tgt_br2" 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:12.165 Cannot find device "nvmf_init_br" 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:28:12.165 Cannot find device "nvmf_init_br2" 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:12.165 Cannot find device "nvmf_tgt_br" 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:12.165 Cannot find device "nvmf_tgt_br2" 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:12.165 Cannot find device "nvmf_br" 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:12.165 Cannot find device "nvmf_init_if" 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:12.165 Cannot find device "nvmf_init_if2" 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:12.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:12.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:12.165 01:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
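Together with the link and bridge commands that continue just below, this nvmftestinit trace builds the virtual topology the tests run on: the target-side veth ends sit in the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, the initiator-side ends stay in the root namespace with 10.0.0.1/24 and 10.0.0.2/24, and both sides are joined through the nvmf_br bridge, with iptables ACCEPT rules for the NVMe/TCP port. A condensed sketch of the same setup for the first interface pair (the second pair and the SPDK_NVMF comment tags on the iptables rules are omitted):

# namespace for the target plus one veth pair per side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addressing: initiator in the root namespace, target inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the two sides and open the NVMe/TCP listener port
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT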
00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:12.165 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:12.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:12.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:28:12.425 00:28:12.425 --- 10.0.0.3 ping statistics --- 00:28:12.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.425 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:12.425 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:12.425 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:28:12.425 00:28:12.425 --- 10.0.0.4 ping statistics --- 00:28:12.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.425 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:12.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:28:12.425 00:28:12.425 --- 10.0.0.1 ping statistics --- 00:28:12.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.425 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:12.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:28:12.425 00:28:12.425 --- 10.0.0.2 ping statistics --- 00:28:12.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.425 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:28:12.425 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:12.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:12.994 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:13.253 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:13.253 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.253 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:13.253 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:13.253 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.253 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:13.253 01:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=90551 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:13.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 90551 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 90551 ']' 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.253 01:41:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:13.253 [2024-09-28 01:41:09.139193] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
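With connectivity confirmed by the pings above, nvmf/common.sh prefixes the application command with the namespace wrapper and nvmfappstart launches nvmf_tgt (pid 90551) inside it, then waits for the RPC socket. A simplified sketch of that pattern; the readiness loop below is a stand-in for autotest_common.sh's waitforlisten helper, not the helper itself:

    # Run the target inside the namespace, exactly as NVMF_APP is composed above.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

    "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0xf &
    nvmfpid=$!

    # Wait until the RPC server answers on /var/tmp/spdk.sock (simplified
    # stand-in for waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done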
00:28:13.253 [2024-09-28 01:41:09.139349] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.512 [2024-09-28 01:41:09.307945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.772 [2024-09-28 01:41:09.465513] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.772 [2024-09-28 01:41:09.465800] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.772 [2024-09-28 01:41:09.465834] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.772 [2024-09-28 01:41:09.465846] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.772 [2024-09-28 01:41:09.465857] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.772 [2024-09-28 01:41:09.466058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.772 [2024-09-28 01:41:09.466323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.772 [2024-09-28 01:41:09.466418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.772 [2024-09-28 01:41:09.466521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.772 [2024-09-28 01:41:09.629831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:28:14.341 01:41:10 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:28:14.341 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
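The enumeration above walks lspci output for PCI class 01, subclass 08, prog-if 02 (NVMe controllers) and keeps only devices still bound to the kernel nvme driver, yielding 0000:00:10.0 and 0000:00:11.0. A standalone sketch of the same scan; the helper pipeline in scripts/common.sh is arranged slightly differently but does the equivalent:

    # Collect NVMe controllers (class code 0108, prog-if 02) that are still
    # attached to the kernel nvme driver.
    nvmes=()
    while read -r bdf; do
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
    done < <(lspci -mm -n -D | grep -i -- -p02 | tr -d '"' \
             | awk -v cc="0108" '{ if (cc ~ $2) print $1 }')

    printf '%s\n' "${nvmes[@]}"    # e.g. 0000:00:10.0 and 0000:00:11.0 in this run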
00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:14.342 01:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:14.342 ************************************ 00:28:14.342 START TEST spdk_target_abort 00:28:14.342 ************************************ 00:28:14.342 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:28:14.342 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:14.342 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:14.342 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.342 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:14.602 spdk_targetn1 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:14.602 [2024-09-28 01:41:10.297272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:14.602 [2024-09-28 01:41:10.337578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:14.602 01:41:10 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:14.602 01:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:17.892 Initializing NVMe Controllers 00:28:17.892 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:17.892 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:17.892 Initialization complete. Launching workers. 
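The spdk_target_abort test attaches the local drive at 0000:00:10.0, exports it over NVMe/TCP on 10.0.0.3:4420, and drives the abort example at queue depths 4, 24 and 64; the first run is launching above and the remaining runs follow in the log. A condensed sketch using scripts/rpc.py in place of the rpc_cmd wrapper, with arguments copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    abort_app=/home/vagrant/spdk_repo/spdk/build/examples/abort

    # Expose the local NVMe drive over NVMe/TCP on 10.0.0.3:4420.
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # -> spdk_targetn1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420

    # Run the abort example at each queue depth. Per run, "abort submitted"
    # plus "failed to submit" equals "I/O completed" (e.g. 1086 + 7230 = 8316
    # for the -q 4 run above).
    for qd in 4 24 64; do
        $abort_app -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done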
00:28:17.892 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8316, failed: 0 00:28:17.892 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1086, failed to submit 7230 00:28:17.892 success 760, unsuccessful 326, failed 0 00:28:17.892 01:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:17.892 01:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:22.083 Initializing NVMe Controllers 00:28:22.083 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:22.083 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:22.083 Initialization complete. Launching workers. 00:28:22.083 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8892, failed: 0 00:28:22.083 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1159, failed to submit 7733 00:28:22.083 success 404, unsuccessful 755, failed 0 00:28:22.083 01:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:22.083 01:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:24.620 Initializing NVMe Controllers 00:28:24.620 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:24.620 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:24.620 Initialization complete. Launching workers. 
00:28:24.620 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27812, failed: 0 00:28:24.620 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2239, failed to submit 25573 00:28:24.620 success 349, unsuccessful 1890, failed 0 00:28:24.620 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:24.620 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.620 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:24.620 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.620 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:24.620 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.620 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90551 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 90551 ']' 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 90551 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90551 00:28:24.880 killing process with pid 90551 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90551' 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 90551 00:28:24.880 01:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 90551 00:28:25.818 ************************************ 00:28:25.818 END TEST spdk_target_abort 00:28:25.818 ************************************ 00:28:25.818 00:28:25.818 real 0m11.365s 00:28:25.818 user 0m45.377s 00:28:25.818 sys 0m2.148s 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:25.818 01:41:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:25.818 01:41:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:25.818 01:41:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:25.818 01:41:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:25.818 ************************************ 00:28:25.818 START TEST kernel_target_abort 00:28:25.818 
************************************ 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:25.818 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:25.819 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:25.819 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:28:25.819 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:25.819 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:28:25.819 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:25.819 01:41:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:26.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:26.337 Waiting for block devices as requested 00:28:26.337 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:26.337 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:26.903 No valid GPT data, bailing 00:28:26.903 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:26.904 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:27.162 No valid GPT data, bailing 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
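The scan above looks for an NVMe namespace that is not zoned and carries no partition table, so it can safely back the kernel NVMe-oF target; spdk-gpt.py and blkid are the two probes used. A rough equivalent, using blkid alone as a stand-in for the spdk-gpt.py check:

    # Pick the last NVMe namespace that is not zoned and has no partition table.
    nvme=
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # Skip zoned namespaces.
        if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
            continue
        fi
        # Skip devices that already have a partition table.
        if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
            continue
        fi
        nvme=/dev/$dev     # last match wins; /dev/nvme1n1 in this run
    done
    echo "selected backing device: $nvme"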
00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:27.162 No valid GPT data, bailing 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:27.162 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:28:27.163 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:28:27.163 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:27.163 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:27.163 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:28:27.163 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:27.163 01:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:27.163 No valid GPT data, bailing 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:27.163 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 --hostid=f8eaa80b-beb5-4887-8952-726ced1ba196 -a 10.0.0.1 -t tcp -s 4420 00:28:27.421 00:28:27.421 Discovery Log Number of Records 2, Generation counter 2 00:28:27.421 =====Discovery Log Entry 0====== 00:28:27.421 trtype: tcp 00:28:27.421 adrfam: ipv4 00:28:27.421 subtype: current discovery subsystem 00:28:27.421 treq: not specified, sq flow control disable supported 00:28:27.421 portid: 1 00:28:27.421 trsvcid: 4420 00:28:27.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:27.421 traddr: 10.0.0.1 00:28:27.421 eflags: none 00:28:27.421 sectype: none 00:28:27.421 =====Discovery Log Entry 1====== 00:28:27.421 trtype: tcp 00:28:27.421 adrfam: ipv4 00:28:27.421 subtype: nvme subsystem 00:28:27.421 treq: not specified, sq flow control disable supported 00:28:27.421 portid: 1 00:28:27.421 trsvcid: 4420 00:28:27.421 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:27.421 traddr: 10.0.0.1 00:28:27.422 eflags: none 00:28:27.422 sectype: none 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:27.422 01:41:23 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.422 01:41:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:30.708 Initializing NVMe Controllers 00:28:30.708 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.708 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:30.708 Initialization complete. Launching workers. 00:28:30.708 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25064, failed: 0 00:28:30.708 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25064, failed to submit 0 00:28:30.708 success 0, unsuccessful 25064, failed 0 00:28:30.708 01:41:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:30.708 01:41:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:33.996 Initializing NVMe Controllers 00:28:33.996 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:33.996 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:33.996 Initialization complete. Launching workers. 
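The kernel_target_abort setup earlier in the log builds the target entirely through the nvmet configfs tree: a subsystem, a namespace backed by /dev/nvme1n1, and a TCP port on 10.0.0.1:4420 linked to that subsystem. The xtrace only shows the values being written; the attribute file names in this sketch follow the standard nvmet configfs layout and are assumptions where the log does not show them:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    modprobe nvmet
    modprobe nvmet_tcp

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1                                > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"
    echo 1                                > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"

    # Linking the subsystem to the port makes it reachable; 'nvme discover'
    # then lists the discovery subsystem and testnqn, as in the log above.
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420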
00:28:33.996 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55505, failed: 0 00:28:33.996 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22302, failed to submit 33203 00:28:33.996 success 0, unsuccessful 22302, failed 0 00:28:33.996 01:41:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:33.996 01:41:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:37.287 Initializing NVMe Controllers 00:28:37.287 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:37.287 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:37.287 Initialization complete. Launching workers. 00:28:37.287 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59403, failed: 0 00:28:37.287 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14810, failed to submit 44593 00:28:37.287 success 0, unsuccessful 14810, failed 0 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:28:37.287 01:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:37.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:38.423 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:38.423 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:38.423 00:28:38.423 real 0m12.583s 00:28:38.423 user 0m6.134s 00:28:38.423 sys 0m4.034s 00:28:38.423 01:41:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:38.423 01:41:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:38.423 ************************************ 00:28:38.423 END TEST kernel_target_abort 00:28:38.423 ************************************ 00:28:38.423 01:41:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:38.423 01:41:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:38.423 
01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:38.423 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:38.423 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.423 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:38.423 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.423 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.424 rmmod nvme_tcp 00:28:38.424 rmmod nvme_fabrics 00:28:38.424 rmmod nvme_keyring 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 90551 ']' 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 90551 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 90551 ']' 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 90551 00:28:38.424 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90551) - No such process 00:28:38.424 Process with pid 90551 is not found 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 90551 is not found' 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:28:38.424 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:38.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:38.992 Waiting for block devices as requested 00:28:38.992 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:38.992 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:39.274 01:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:39.275 01:41:35 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:39.275 01:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.542 01:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:39.542 00:28:39.542 real 0m27.629s 00:28:39.542 user 0m52.865s 00:28:39.542 sys 0m7.588s 00:28:39.542 01:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.542 01:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:39.542 ************************************ 00:28:39.542 END TEST nvmf_abort_qd_sizes 00:28:39.542 ************************************ 00:28:39.542 01:41:35 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:39.542 01:41:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:39.542 01:41:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.542 01:41:35 -- common/autotest_common.sh@10 -- # set +x 00:28:39.542 ************************************ 00:28:39.542 START TEST keyring_file 00:28:39.542 ************************************ 00:28:39.542 01:41:35 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:39.542 * Looking for test storage... 
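The teardown that closed out the abort-qd-sizes suite above reverses all of this: the configfs nodes are removed and the nvmet modules unloaded, the SPDK_NVMF-tagged firewall rules are filtered out of a save/restore round trip, and the veth/bridge/namespace topology is deleted. A condensed sketch, in which ip netns delete stands in for the remove_spdk_ns helper:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    # Kernel target: disable the namespace, unlink it from the port, remove
    # the configfs nodes, then unload the modules (clean_kernel_target).
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet

    # Firewall: every rule added by ipts carries an 'SPDK_NVMF:' comment, so
    # the whole set is dropped by filtering a save/restore round trip (iptr).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Network: detach the bridge ports, delete the bridge, the initiator-side
    # interfaces and the namespace (nvmf_veth_fini / remove_spdk_ns).
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk   # also removes nvmf_tgt_if/nvmf_tgt_if2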
00:28:39.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:39.542 01:41:35 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:39.542 01:41:35 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:28:39.542 01:41:35 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:39.542 01:41:35 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.542 01:41:35 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:39.542 01:41:35 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.543 01:41:35 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:39.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.543 --rc genhtml_branch_coverage=1 00:28:39.543 --rc genhtml_function_coverage=1 00:28:39.543 --rc genhtml_legend=1 00:28:39.543 --rc geninfo_all_blocks=1 00:28:39.543 --rc geninfo_unexecuted_blocks=1 00:28:39.543 00:28:39.543 ' 00:28:39.543 01:41:35 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:39.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.543 --rc genhtml_branch_coverage=1 00:28:39.543 --rc genhtml_function_coverage=1 00:28:39.543 --rc genhtml_legend=1 00:28:39.543 --rc geninfo_all_blocks=1 00:28:39.543 --rc 
geninfo_unexecuted_blocks=1 00:28:39.543 00:28:39.543 ' 00:28:39.543 01:41:35 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:39.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.543 --rc genhtml_branch_coverage=1 00:28:39.543 --rc genhtml_function_coverage=1 00:28:39.543 --rc genhtml_legend=1 00:28:39.543 --rc geninfo_all_blocks=1 00:28:39.543 --rc geninfo_unexecuted_blocks=1 00:28:39.543 00:28:39.543 ' 00:28:39.543 01:41:35 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:39.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.543 --rc genhtml_branch_coverage=1 00:28:39.543 --rc genhtml_function_coverage=1 00:28:39.543 --rc genhtml_legend=1 00:28:39.543 --rc geninfo_all_blocks=1 00:28:39.543 --rc geninfo_unexecuted_blocks=1 00:28:39.543 00:28:39.543 ' 00:28:39.543 01:41:35 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:39.543 01:41:35 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:39.543 01:41:35 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.543 01:41:35 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.543 01:41:35 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.543 01:41:35 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.543 01:41:35 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.543 01:41:35 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.543 01:41:35 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.543 01:41:35 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:39.543 01:41:35 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:39.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.543 01:41:35 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:39.543 01:41:35 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:39.543 01:41:35 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:39.543 01:41:35 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:39.543 01:41:35 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:39.543 01:41:35 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:39.543 01:41:35 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:39.543 01:41:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:39.543 01:41:35 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:39.543 01:41:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:39.543 01:41:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:39.543 01:41:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:39.543 01:41:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NIgBPUHpnZ 00:28:39.543 01:41:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:28:39.543 01:41:35 keyring_file -- nvmf/common.sh@729 -- # python - 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NIgBPUHpnZ 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NIgBPUHpnZ 00:28:39.803 01:41:35 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NIgBPUHpnZ 00:28:39.803 01:41:35 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.e6WCE5iSZ9 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:39.803 01:41:35 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:39.803 01:41:35 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:28:39.803 01:41:35 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:28:39.803 01:41:35 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:28:39.803 01:41:35 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:28:39.803 01:41:35 keyring_file -- nvmf/common.sh@729 -- # python - 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.e6WCE5iSZ9 00:28:39.803 01:41:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.e6WCE5iSZ9 00:28:39.803 01:41:35 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.e6WCE5iSZ9 00:28:39.803 01:41:35 keyring_file -- keyring/file.sh@30 -- # tgtpid=91669 00:28:39.803 01:41:35 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.803 01:41:35 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91669 00:28:39.803 01:41:35 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91669 ']' 00:28:39.803 01:41:35 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.803 01:41:35 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
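For reference, the prep_key step traced above (format_interchange_psk plus chmod 0600 on a mktemp file) can be reproduced by hand roughly as follows. This is a minimal sketch: it assumes format_interchange_psk (from test/nvmf/common.sh) writes the NVMeTLSkey-1 interchange string to stdout, as the trace suggests, and that nvmf/common.sh can be sourced standalone; the key name and RPC socket mirror the ones used later in this run.

# Sketch of prep_key: wrap a raw hex key in the NVMeTLSkey-1 interchange format
# and store it in an owner-only temp file (0660 is rejected later in this run).
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides format_interchange_psk
key_path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key_path"
chmod 0600 "$key_path"
# Register the file under a key name over an SPDK application's RPC socket:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"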
00:28:39.803 01:41:35 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.803 01:41:35 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.803 01:41:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:39.803 [2024-09-28 01:41:35.715195] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:28:39.803 [2024-09-28 01:41:35.715414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91669 ] 00:28:40.062 [2024-09-28 01:41:35.886774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.320 [2024-09-28 01:41:36.115111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.578 [2024-09-28 01:41:36.306278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:40.837 01:41:36 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:40.837 01:41:36 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:28:40.837 01:41:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:40.837 01:41:36 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.837 01:41:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:40.837 [2024-09-28 01:41:36.736232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.837 null0 00:28:40.837 [2024-09-28 01:41:36.768310] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:40.837 [2024-09-28 01:41:36.768651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.097 01:41:36 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:41.097 [2024-09-28 01:41:36.796215] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:41.097 request: 00:28:41.097 { 00:28:41.097 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:41.097 "secure_channel": false, 00:28:41.097 "listen_address": { 00:28:41.097 "trtype": "tcp", 00:28:41.097 "traddr": "127.0.0.1", 00:28:41.097 "trsvcid": "4420" 00:28:41.097 }, 00:28:41.097 "method": "nvmf_subsystem_add_listener", 
00:28:41.097 "req_id": 1 00:28:41.097 } 00:28:41.097 Got JSON-RPC error response 00:28:41.097 response: 00:28:41.097 { 00:28:41.097 "code": -32602, 00:28:41.097 "message": "Invalid parameters" 00:28:41.097 } 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:41.097 01:41:36 keyring_file -- keyring/file.sh@47 -- # bperfpid=91685 00:28:41.097 01:41:36 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:41.097 01:41:36 keyring_file -- keyring/file.sh@49 -- # waitforlisten 91685 /var/tmp/bperf.sock 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91685 ']' 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.097 01:41:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:41.097 [2024-09-28 01:41:36.889527] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:28:41.097 [2024-09-28 01:41:36.889686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91685 ] 00:28:41.356 [2024-09-28 01:41:37.048077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.356 [2024-09-28 01:41:37.267308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.615 [2024-09-28 01:41:37.412242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:42.180 01:41:37 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.180 01:41:37 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:28:42.180 01:41:37 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIgBPUHpnZ 00:28:42.180 01:41:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIgBPUHpnZ 00:28:42.438 01:41:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.e6WCE5iSZ9 00:28:42.438 01:41:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.e6WCE5iSZ9 00:28:42.697 01:41:38 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:42.697 01:41:38 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:42.697 01:41:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:42.697 01:41:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:42.697 01:41:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:42.956 01:41:38 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NIgBPUHpnZ == \/\t\m\p\/\t\m\p\.\N\I\g\B\P\U\H\p\n\Z ]] 00:28:42.956 01:41:38 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:42.956 01:41:38 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:42.956 01:41:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:42.956 01:41:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:42.956 01:41:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.215 01:41:38 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.e6WCE5iSZ9 == \/\t\m\p\/\t\m\p\.\e\6\W\C\E\5\i\S\Z\9 ]] 00:28:43.215 01:41:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:43.215 01:41:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:43.215 01:41:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:43.215 01:41:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:43.215 01:41:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:43.215 01:41:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.473 01:41:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:43.473 01:41:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:43.473 01:41:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:43.473 01:41:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:43.474 01:41:39 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:43.474 01:41:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.474 01:41:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:43.732 01:41:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:43.732 01:41:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:43.732 01:41:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:43.991 [2024-09-28 01:41:39.691901] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:43.991 nvme0n1 00:28:43.991 01:41:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:43.991 01:41:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:43.991 01:41:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:43.991 01:41:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:43.991 01:41:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.991 01:41:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:44.250 01:41:40 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:44.250 01:41:40 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:44.250 01:41:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:44.250 01:41:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:44.250 01:41:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:44.250 01:41:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:44.250 01:41:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:44.509 01:41:40 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:44.509 01:41:40 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.767 Running I/O for 1 seconds... 
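The get_refcnt checks above reduce to filtering keyring_get_keys output with jq; the same query can be issued directly, reusing the commands from the trace. A registered key starts at refcnt 1, and attaching a controller with --psk key0 bumps it to 2, which is what the (( 2 == 2 )) assertion verifies.

# Expected output at this point in the run: 2 for key0, 1 for key1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt'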
00:28:45.703 9397.00 IOPS, 36.71 MiB/s 00:28:45.703 Latency(us) 00:28:45.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.703 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:45.703 nvme0n1 : 1.01 9445.07 36.89 0.00 0.00 13496.99 4647.10 18826.71 00:28:45.703 =================================================================================================================== 00:28:45.703 Total : 9445.07 36.89 0.00 0.00 13496.99 4647.10 18826.71 00:28:45.703 { 00:28:45.703 "results": [ 00:28:45.703 { 00:28:45.703 "job": "nvme0n1", 00:28:45.703 "core_mask": "0x2", 00:28:45.703 "workload": "randrw", 00:28:45.703 "percentage": 50, 00:28:45.703 "status": "finished", 00:28:45.703 "queue_depth": 128, 00:28:45.703 "io_size": 4096, 00:28:45.703 "runtime": 1.008674, 00:28:45.703 "iops": 9445.073433041796, 00:28:45.703 "mibps": 36.89481809781952, 00:28:45.703 "io_failed": 0, 00:28:45.703 "io_timeout": 0, 00:28:45.703 "avg_latency_us": 13496.994022347966, 00:28:45.703 "min_latency_us": 4647.098181818182, 00:28:45.703 "max_latency_us": 18826.705454545456 00:28:45.703 } 00:28:45.703 ], 00:28:45.703 "core_count": 1 00:28:45.703 } 00:28:45.703 01:41:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:45.703 01:41:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:45.962 01:41:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:28:45.962 01:41:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:45.962 01:41:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:45.962 01:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:45.962 01:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:45.962 01:41:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.221 01:41:42 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:46.221 01:41:42 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:28:46.221 01:41:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:46.221 01:41:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:46.221 01:41:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.221 01:41:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.221 01:41:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:46.479 01:41:42 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:28:46.479 01:41:42 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:46.479 01:41:42 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:28:46.480 01:41:42 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:46.480 01:41:42 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:28:46.480 01:41:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:46.480 01:41:42 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:28:46.480 01:41:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:46.480 01:41:42 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:46.480 01:41:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:46.739 [2024-09-28 01:41:42.455737] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:46.739 [2024-09-28 01:41:42.455931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:28:46.739 [2024-09-28 01:41:42.456905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:28:46.739 [2024-09-28 01:41:42.457897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:46.739 [2024-09-28 01:41:42.457927] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:46.739 [2024-09-28 01:41:42.457958] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:46.739 [2024-09-28 01:41:42.458007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
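The attach failure above is the intended negative case: against this target setup an attach using --psk key1 is expected to error out, and the NOT wrapper from autotest_common.sh treats the non-zero exit as a pass. A rough sketch of that inversion pattern (the in-tree helper carries extra bookkeeping around es and xtrace):

# Expected-failure check in the style of NOT <cmd>:
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "unexpected success: attach with key1 should have been rejected" >&2
    exit 1
fi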
00:28:46.739 request: 00:28:46.739 { 00:28:46.739 "name": "nvme0", 00:28:46.739 "trtype": "tcp", 00:28:46.739 "traddr": "127.0.0.1", 00:28:46.739 "adrfam": "ipv4", 00:28:46.739 "trsvcid": "4420", 00:28:46.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:46.739 "prchk_reftag": false, 00:28:46.739 "prchk_guard": false, 00:28:46.739 "hdgst": false, 00:28:46.739 "ddgst": false, 00:28:46.739 "psk": "key1", 00:28:46.739 "allow_unrecognized_csi": false, 00:28:46.739 "method": "bdev_nvme_attach_controller", 00:28:46.739 "req_id": 1 00:28:46.739 } 00:28:46.739 Got JSON-RPC error response 00:28:46.739 response: 00:28:46.739 { 00:28:46.739 "code": -5, 00:28:46.739 "message": "Input/output error" 00:28:46.739 } 00:28:46.739 01:41:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:28:46.739 01:41:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:46.739 01:41:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:46.739 01:41:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:46.739 01:41:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:46.739 01:41:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:46.739 01:41:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:46.739 01:41:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.739 01:41:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.739 01:41:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:46.997 01:41:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:46.997 01:41:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:46.997 01:41:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:46.997 01:41:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:46.997 01:41:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.997 01:41:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.997 01:41:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:47.256 01:41:43 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:47.256 01:41:43 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:47.256 01:41:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:47.515 01:41:43 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:47.515 01:41:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:47.774 01:41:43 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:47.774 01:41:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:47.774 01:41:43 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:48.032 01:41:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:48.032 01:41:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.NIgBPUHpnZ 00:28:48.032 01:41:43 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIgBPUHpnZ 00:28:48.032 01:41:43 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:28:48.032 01:41:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIgBPUHpnZ 00:28:48.032 01:41:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:28:48.032 01:41:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.032 01:41:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:28:48.032 01:41:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.032 01:41:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIgBPUHpnZ 00:28:48.032 01:41:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIgBPUHpnZ 00:28:48.291 [2024-09-28 01:41:44.008640] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NIgBPUHpnZ': 0100660 00:28:48.291 [2024-09-28 01:41:44.008686] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:48.291 request: 00:28:48.291 { 00:28:48.291 "name": "key0", 00:28:48.291 "path": "/tmp/tmp.NIgBPUHpnZ", 00:28:48.291 "method": "keyring_file_add_key", 00:28:48.291 "req_id": 1 00:28:48.291 } 00:28:48.291 Got JSON-RPC error response 00:28:48.291 response: 00:28:48.291 { 00:28:48.291 "code": -1, 00:28:48.291 "message": "Operation not permitted" 00:28:48.291 } 00:28:48.291 01:41:44 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:28:48.291 01:41:44 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:48.291 01:41:44 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:48.291 01:41:44 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:48.291 01:41:44 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.NIgBPUHpnZ 00:28:48.291 01:41:44 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIgBPUHpnZ 00:28:48.291 01:41:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIgBPUHpnZ 00:28:48.549 01:41:44 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.NIgBPUHpnZ 00:28:48.549 01:41:44 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:48.549 01:41:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:48.549 01:41:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:48.549 01:41:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:48.549 01:41:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:48.549 01:41:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.549 01:41:44 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:48.549 01:41:44 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:48.549 01:41:44 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:28:48.549 01:41:44 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:48.549 01:41:44 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:28:48.549 01:41:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.549 01:41:44 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:28:48.549 01:41:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.550 01:41:44 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:48.550 01:41:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:48.808 [2024-09-28 01:41:44.676885] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NIgBPUHpnZ': No such file or directory 00:28:48.808 [2024-09-28 01:41:44.676952] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:48.808 [2024-09-28 01:41:44.676978] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:48.808 [2024-09-28 01:41:44.676991] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:48.808 [2024-09-28 01:41:44.677008] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:48.808 [2024-09-28 01:41:44.677020] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:48.808 request: 00:28:48.808 { 00:28:48.808 "name": "nvme0", 00:28:48.808 "trtype": "tcp", 00:28:48.808 "traddr": "127.0.0.1", 00:28:48.808 "adrfam": "ipv4", 00:28:48.808 "trsvcid": "4420", 00:28:48.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.808 "prchk_reftag": false, 00:28:48.808 "prchk_guard": false, 00:28:48.808 "hdgst": false, 00:28:48.808 "ddgst": false, 00:28:48.808 "psk": "key0", 00:28:48.808 "allow_unrecognized_csi": false, 00:28:48.808 "method": "bdev_nvme_attach_controller", 00:28:48.808 "req_id": 1 00:28:48.808 } 00:28:48.808 Got JSON-RPC error response 00:28:48.808 response: 00:28:48.808 { 00:28:48.808 "code": -19, 00:28:48.808 "message": "No such device" 00:28:48.808 } 00:28:48.808 01:41:44 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:28:48.808 01:41:44 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:48.808 01:41:44 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:48.808 01:41:44 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:48.808 01:41:44 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:48.808 01:41:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:49.066 01:41:44 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:49.066 01:41:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:49.066 01:41:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:49.066 01:41:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:49.066 
01:41:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:49.066 01:41:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:49.066 01:41:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aHojX0jd5T 00:28:49.066 01:41:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:49.067 01:41:44 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:49.067 01:41:44 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.067 01:41:44 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:28:49.067 01:41:44 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:28:49.067 01:41:44 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:28:49.067 01:41:44 keyring_file -- nvmf/common.sh@729 -- # python - 00:28:49.067 01:41:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aHojX0jd5T 00:28:49.067 01:41:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aHojX0jd5T 00:28:49.067 01:41:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.aHojX0jd5T 00:28:49.067 01:41:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aHojX0jd5T 00:28:49.067 01:41:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aHojX0jd5T 00:28:49.325 01:41:45 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:49.325 01:41:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:49.892 nvme0n1 00:28:49.892 01:41:45 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:49.892 01:41:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:49.892 01:41:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.892 01:41:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:49.892 01:41:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:49.892 01:41:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.892 01:41:45 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:49.892 01:41:45 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:49.892 01:41:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:50.149 01:41:46 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:50.149 01:41:46 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:50.149 01:41:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:50.149 01:41:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:50.149 01:41:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:50.407 01:41:46 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:28:50.407 01:41:46 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:28:50.407 01:41:46 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:28:50.407 01:41:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:50.407 01:41:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:50.407 01:41:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:50.407 01:41:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:50.665 01:41:46 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:28:50.665 01:41:46 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:50.665 01:41:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:50.924 01:41:46 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:28:50.924 01:41:46 keyring_file -- keyring/file.sh@105 -- # jq length 00:28:50.924 01:41:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.182 01:41:47 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:28:51.182 01:41:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aHojX0jd5T 00:28:51.182 01:41:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aHojX0jd5T 00:28:51.441 01:41:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.e6WCE5iSZ9 00:28:51.441 01:41:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.e6WCE5iSZ9 00:28:51.699 01:41:47 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:51.699 01:41:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:51.957 nvme0n1 00:28:51.957 01:41:47 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:28:51.957 01:41:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:52.216 01:41:48 keyring_file -- keyring/file.sh@113 -- # config='{ 00:28:52.216 "subsystems": [ 00:28:52.216 { 00:28:52.216 "subsystem": "keyring", 00:28:52.216 "config": [ 00:28:52.216 { 00:28:52.216 "method": "keyring_file_add_key", 00:28:52.216 "params": { 00:28:52.216 "name": "key0", 00:28:52.216 "path": "/tmp/tmp.aHojX0jd5T" 00:28:52.216 } 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "method": "keyring_file_add_key", 00:28:52.216 "params": { 00:28:52.216 "name": "key1", 00:28:52.216 "path": "/tmp/tmp.e6WCE5iSZ9" 00:28:52.216 } 00:28:52.216 } 00:28:52.216 ] 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "subsystem": "iobuf", 00:28:52.216 "config": [ 00:28:52.216 { 00:28:52.216 "method": "iobuf_set_options", 00:28:52.216 "params": { 00:28:52.216 "small_pool_count": 8192, 00:28:52.216 "large_pool_count": 1024, 00:28:52.216 "small_bufsize": 8192, 00:28:52.216 "large_bufsize": 135168 00:28:52.216 } 00:28:52.216 } 00:28:52.216 ] 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "subsystem": "sock", 00:28:52.216 "config": [ 
00:28:52.216 { 00:28:52.216 "method": "sock_set_default_impl", 00:28:52.216 "params": { 00:28:52.216 "impl_name": "uring" 00:28:52.216 } 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "method": "sock_impl_set_options", 00:28:52.216 "params": { 00:28:52.216 "impl_name": "ssl", 00:28:52.216 "recv_buf_size": 4096, 00:28:52.216 "send_buf_size": 4096, 00:28:52.216 "enable_recv_pipe": true, 00:28:52.216 "enable_quickack": false, 00:28:52.216 "enable_placement_id": 0, 00:28:52.216 "enable_zerocopy_send_server": true, 00:28:52.216 "enable_zerocopy_send_client": false, 00:28:52.216 "zerocopy_threshold": 0, 00:28:52.216 "tls_version": 0, 00:28:52.216 "enable_ktls": false 00:28:52.216 } 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "method": "sock_impl_set_options", 00:28:52.216 "params": { 00:28:52.216 "impl_name": "posix", 00:28:52.216 "recv_buf_size": 2097152, 00:28:52.216 "send_buf_size": 2097152, 00:28:52.216 "enable_recv_pipe": true, 00:28:52.216 "enable_quickack": false, 00:28:52.216 "enable_placement_id": 0, 00:28:52.216 "enable_zerocopy_send_server": true, 00:28:52.216 "enable_zerocopy_send_client": false, 00:28:52.216 "zerocopy_threshold": 0, 00:28:52.216 "tls_version": 0, 00:28:52.216 "enable_ktls": false 00:28:52.216 } 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "method": "sock_impl_set_options", 00:28:52.216 "params": { 00:28:52.216 "impl_name": "uring", 00:28:52.216 "recv_buf_size": 2097152, 00:28:52.216 "send_buf_size": 2097152, 00:28:52.216 "enable_recv_pipe": true, 00:28:52.216 "enable_quickack": false, 00:28:52.216 "enable_placement_id": 0, 00:28:52.216 "enable_zerocopy_send_server": false, 00:28:52.216 "enable_zerocopy_send_client": false, 00:28:52.216 "zerocopy_threshold": 0, 00:28:52.216 "tls_version": 0, 00:28:52.216 "enable_ktls": false 00:28:52.216 } 00:28:52.216 } 00:28:52.216 ] 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "subsystem": "vmd", 00:28:52.216 "config": [] 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "subsystem": "accel", 00:28:52.216 "config": [ 00:28:52.216 { 00:28:52.216 "method": "accel_set_options", 00:28:52.216 "params": { 00:28:52.216 "small_cache_size": 128, 00:28:52.216 "large_cache_size": 16, 00:28:52.216 "task_count": 2048, 00:28:52.216 "sequence_count": 2048, 00:28:52.216 "buf_count": 2048 00:28:52.216 } 00:28:52.216 } 00:28:52.216 ] 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "subsystem": "bdev", 00:28:52.216 "config": [ 00:28:52.216 { 00:28:52.216 "method": "bdev_set_options", 00:28:52.216 "params": { 00:28:52.216 "bdev_io_pool_size": 65535, 00:28:52.216 "bdev_io_cache_size": 256, 00:28:52.216 "bdev_auto_examine": true, 00:28:52.216 "iobuf_small_cache_size": 128, 00:28:52.216 "iobuf_large_cache_size": 16 00:28:52.216 } 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "method": "bdev_raid_set_options", 00:28:52.216 "params": { 00:28:52.216 "process_window_size_kb": 1024, 00:28:52.216 "process_max_bandwidth_mb_sec": 0 00:28:52.216 } 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "method": "bdev_iscsi_set_options", 00:28:52.216 "params": { 00:28:52.216 "timeout_sec": 30 00:28:52.216 } 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "method": "bdev_nvme_set_options", 00:28:52.216 "params": { 00:28:52.216 "action_on_timeout": "none", 00:28:52.216 "timeout_us": 0, 00:28:52.216 "timeout_admin_us": 0, 00:28:52.216 "keep_alive_timeout_ms": 10000, 00:28:52.216 "arbitration_burst": 0, 00:28:52.216 "low_priority_weight": 0, 00:28:52.216 "medium_priority_weight": 0, 00:28:52.216 "high_priority_weight": 0, 00:28:52.216 "nvme_adminq_poll_period_us": 10000, 00:28:52.216 
"nvme_ioq_poll_period_us": 0, 00:28:52.216 "io_queue_requests": 512, 00:28:52.216 "delay_cmd_submit": true, 00:28:52.216 "transport_retry_count": 4, 00:28:52.216 "bdev_retry_count": 3, 00:28:52.216 "transport_ack_timeout": 0, 00:28:52.216 "ctrlr_loss_timeout_sec": 0, 00:28:52.216 "reconnect_delay_sec": 0, 00:28:52.216 "fast_io_fail_timeout_sec": 0, 00:28:52.216 "disable_auto_failback": false, 00:28:52.216 "generate_uuids": false, 00:28:52.216 "transport_tos": 0, 00:28:52.216 "nvme_error_stat": false, 00:28:52.216 "rdma_srq_size": 0, 00:28:52.216 "io_path_stat": false, 00:28:52.216 "allow_accel_sequence": false, 00:28:52.216 "rdma_max_cq_size": 0, 00:28:52.216 "rdma_cm_event_timeout_ms": 0, 00:28:52.216 "dhchap_digests": [ 00:28:52.216 "sha256", 00:28:52.216 "sha384", 00:28:52.216 "sha512" 00:28:52.216 ], 00:28:52.216 "dhchap_dhgroups": [ 00:28:52.216 "null", 00:28:52.216 "ffdhe2048", 00:28:52.216 "ffdhe3072", 00:28:52.216 "ffdhe4096", 00:28:52.216 "ffdhe6144", 00:28:52.216 "ffdhe8192" 00:28:52.216 ] 00:28:52.216 } 00:28:52.216 }, 00:28:52.216 { 00:28:52.216 "method": "bdev_nvme_attach_controller", 00:28:52.216 "params": { 00:28:52.216 "name": "nvme0", 00:28:52.216 "trtype": "TCP", 00:28:52.216 "adrfam": "IPv4", 00:28:52.216 "traddr": "127.0.0.1", 00:28:52.216 "trsvcid": "4420", 00:28:52.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.216 "prchk_reftag": false, 00:28:52.216 "prchk_guard": false, 00:28:52.216 "ctrlr_loss_timeout_sec": 0, 00:28:52.216 "reconnect_delay_sec": 0, 00:28:52.217 "fast_io_fail_timeout_sec": 0, 00:28:52.217 "psk": "key0", 00:28:52.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:52.217 "hdgst": false, 00:28:52.217 "ddgst": false 00:28:52.217 } 00:28:52.217 }, 00:28:52.217 { 00:28:52.217 "method": "bdev_nvme_set_hotplug", 00:28:52.217 "params": { 00:28:52.217 "period_us": 100000, 00:28:52.217 "enable": false 00:28:52.217 } 00:28:52.217 }, 00:28:52.217 { 00:28:52.217 "method": "bdev_wait_for_examine" 00:28:52.217 } 00:28:52.217 ] 00:28:52.217 }, 00:28:52.217 { 00:28:52.217 "subsystem": "nbd", 00:28:52.217 "config": [] 00:28:52.217 } 00:28:52.217 ] 00:28:52.217 }' 00:28:52.217 01:41:48 keyring_file -- keyring/file.sh@115 -- # killprocess 91685 00:28:52.217 01:41:48 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91685 ']' 00:28:52.217 01:41:48 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91685 00:28:52.217 01:41:48 keyring_file -- common/autotest_common.sh@955 -- # uname 00:28:52.217 01:41:48 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:52.217 01:41:48 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91685 00:28:52.475 01:41:48 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:52.475 killing process with pid 91685 00:28:52.475 Received shutdown signal, test time was about 1.000000 seconds 00:28:52.475 00:28:52.475 Latency(us) 00:28:52.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.475 =================================================================================================================== 00:28:52.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:52.475 01:41:48 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:52.475 01:41:48 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91685' 00:28:52.475 01:41:48 keyring_file -- common/autotest_common.sh@969 -- # kill 91685 00:28:52.475 01:41:48 keyring_file -- common/autotest_common.sh@974 -- # 
wait 91685 00:28:53.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:53.411 01:41:49 keyring_file -- keyring/file.sh@118 -- # bperfpid=91942 00:28:53.411 01:41:49 keyring_file -- keyring/file.sh@120 -- # waitforlisten 91942 /var/tmp/bperf.sock 00:28:53.411 01:41:49 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91942 ']' 00:28:53.411 01:41:49 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:53.411 01:41:49 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:53.411 01:41:49 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:53.411 01:41:49 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:53.411 01:41:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:53.411 01:41:49 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:53.411 01:41:49 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:28:53.411 "subsystems": [ 00:28:53.411 { 00:28:53.411 "subsystem": "keyring", 00:28:53.411 "config": [ 00:28:53.411 { 00:28:53.411 "method": "keyring_file_add_key", 00:28:53.411 "params": { 00:28:53.411 "name": "key0", 00:28:53.411 "path": "/tmp/tmp.aHojX0jd5T" 00:28:53.411 } 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "method": "keyring_file_add_key", 00:28:53.411 "params": { 00:28:53.411 "name": "key1", 00:28:53.411 "path": "/tmp/tmp.e6WCE5iSZ9" 00:28:53.411 } 00:28:53.411 } 00:28:53.411 ] 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "subsystem": "iobuf", 00:28:53.411 "config": [ 00:28:53.411 { 00:28:53.411 "method": "iobuf_set_options", 00:28:53.411 "params": { 00:28:53.411 "small_pool_count": 8192, 00:28:53.411 "large_pool_count": 1024, 00:28:53.411 "small_bufsize": 8192, 00:28:53.411 "large_bufsize": 135168 00:28:53.411 } 00:28:53.411 } 00:28:53.411 ] 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "subsystem": "sock", 00:28:53.411 "config": [ 00:28:53.411 { 00:28:53.411 "method": "sock_set_default_impl", 00:28:53.411 "params": { 00:28:53.411 "impl_name": "uring" 00:28:53.411 } 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "method": "sock_impl_set_options", 00:28:53.411 "params": { 00:28:53.411 "impl_name": "ssl", 00:28:53.411 "recv_buf_size": 4096, 00:28:53.411 "send_buf_size": 4096, 00:28:53.411 "enable_recv_pipe": true, 00:28:53.411 "enable_quickack": false, 00:28:53.411 "enable_placement_id": 0, 00:28:53.411 "enable_zerocopy_send_server": true, 00:28:53.411 "enable_zerocopy_send_client": false, 00:28:53.411 "zerocopy_threshold": 0, 00:28:53.411 "tls_version": 0, 00:28:53.411 "enable_ktls": false 00:28:53.411 } 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "method": "sock_impl_set_options", 00:28:53.411 "params": { 00:28:53.411 "impl_name": "posix", 00:28:53.411 "recv_buf_size": 2097152, 00:28:53.411 "send_buf_size": 2097152, 00:28:53.411 "enable_recv_pipe": true, 00:28:53.411 "enable_quickack": false, 00:28:53.411 "enable_placement_id": 0, 00:28:53.411 "enable_zerocopy_send_server": true, 00:28:53.411 "enable_zerocopy_send_client": false, 00:28:53.411 "zerocopy_threshold": 0, 00:28:53.411 "tls_version": 0, 00:28:53.411 "enable_ktls": false 00:28:53.411 } 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "method": "sock_impl_set_options", 00:28:53.411 "params": { 00:28:53.411 "impl_name": "uring", 00:28:53.411 "recv_buf_size": 
2097152, 00:28:53.411 "send_buf_size": 2097152, 00:28:53.411 "enable_recv_pipe": true, 00:28:53.411 "enable_quickack": false, 00:28:53.411 "enable_placement_id": 0, 00:28:53.411 "enable_zerocopy_send_server": false, 00:28:53.411 "enable_zerocopy_send_client": false, 00:28:53.411 "zerocopy_threshold": 0, 00:28:53.411 "tls_version": 0, 00:28:53.411 "enable_ktls": false 00:28:53.411 } 00:28:53.411 } 00:28:53.411 ] 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "subsystem": "vmd", 00:28:53.411 "config": [] 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "subsystem": "accel", 00:28:53.411 "config": [ 00:28:53.411 { 00:28:53.411 "method": "accel_set_options", 00:28:53.411 "params": { 00:28:53.411 "small_cache_size": 128, 00:28:53.411 "large_cache_size": 16, 00:28:53.411 "task_count": 2048, 00:28:53.411 "sequence_count": 2048, 00:28:53.411 "buf_count": 2048 00:28:53.411 } 00:28:53.411 } 00:28:53.411 ] 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "subsystem": "bdev", 00:28:53.411 "config": [ 00:28:53.411 { 00:28:53.411 "method": "bdev_set_options", 00:28:53.411 "params": { 00:28:53.411 "bdev_io_pool_size": 65535, 00:28:53.411 "bdev_io_cache_size": 256, 00:28:53.411 "bdev_auto_examine": true, 00:28:53.411 "iobuf_small_cache_size": 128, 00:28:53.411 "iobuf_large_cache_size": 16 00:28:53.411 } 00:28:53.411 }, 00:28:53.411 { 00:28:53.411 "method": "bdev_raid_set_options", 00:28:53.411 "params": { 00:28:53.411 "process_window_size_kb": 1024, 00:28:53.411 "process_max_bandwidth_mb_sec": 0 00:28:53.412 } 00:28:53.412 }, 00:28:53.412 { 00:28:53.412 "method": "bdev_iscsi_set_options", 00:28:53.412 "params": { 00:28:53.412 "timeout_sec": 30 00:28:53.412 } 00:28:53.412 }, 00:28:53.412 { 00:28:53.412 "method": "bdev_nvme_set_options", 00:28:53.412 "params": { 00:28:53.412 "action_on_timeout": "none", 00:28:53.412 "timeout_us": 0, 00:28:53.412 "timeout_admin_us": 0, 00:28:53.412 "keep_alive_timeout_ms": 10000, 00:28:53.412 "arbitration_burst": 0, 00:28:53.412 "low_priority_weight": 0, 00:28:53.412 "medium_priority_weight": 0, 00:28:53.412 "high_priority_weight": 0, 00:28:53.412 "nvme_adminq_poll_period_us": 10000, 00:28:53.412 "nvme_ioq_poll_period_us": 0, 00:28:53.412 "io_queue_requests": 512, 00:28:53.412 "delay_cmd_submit": true, 00:28:53.412 "transport_retry_count": 4, 00:28:53.412 "bdev_retry_count": 3, 00:28:53.412 "transport_ack_timeout": 0, 00:28:53.412 "ctrlr_loss_timeout_sec": 0, 00:28:53.412 "reconnect_delay_sec": 0, 00:28:53.412 "fast_io_fail_timeout_sec": 0, 00:28:53.412 "disable_auto_failback": false, 00:28:53.412 "generate_uuids": false, 00:28:53.412 "transport_tos": 0, 00:28:53.412 "nvme_error_stat": false, 00:28:53.412 "rdma_srq_size": 0, 00:28:53.412 "io_path_stat": false, 00:28:53.412 "allow_accel_sequence": false, 00:28:53.412 "rdma_max_cq_size": 0, 00:28:53.412 "rdma_cm_event_timeout_ms": 0, 00:28:53.412 "dhchap_digests": [ 00:28:53.412 "sha256", 00:28:53.412 "sha384", 00:28:53.412 "sha512" 00:28:53.412 ], 00:28:53.412 "dhchap_dhgroups": [ 00:28:53.412 "null", 00:28:53.412 "ffdhe2048", 00:28:53.412 "ffdhe3072", 00:28:53.412 "ffdhe4096", 00:28:53.412 "ffdhe6144", 00:28:53.412 "ffdhe8192" 00:28:53.412 ] 00:28:53.412 } 00:28:53.412 }, 00:28:53.412 { 00:28:53.412 "method": "bdev_nvme_attach_controller", 00:28:53.412 "params": { 00:28:53.412 "name": "nvme0", 00:28:53.412 "trtype": "TCP", 00:28:53.412 "adrfam": "IPv4", 00:28:53.412 "traddr": "127.0.0.1", 00:28:53.412 "trsvcid": "4420", 00:28:53.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:53.412 "prchk_reftag": false, 00:28:53.412 "prchk_guard": 
false, 00:28:53.412 "ctrlr_loss_timeout_sec": 0, 00:28:53.412 "reconnect_delay_sec": 0, 00:28:53.412 "fast_io_fail_timeout_sec": 0, 00:28:53.412 "psk": "key0", 00:28:53.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:53.412 "hdgst": false, 00:28:53.412 "ddgst": false 00:28:53.412 } 00:28:53.412 }, 00:28:53.412 { 00:28:53.412 "method": "bdev_nvme_set_hotplug", 00:28:53.412 "params": { 00:28:53.412 "period_us": 100000, 00:28:53.412 "enable": false 00:28:53.412 } 00:28:53.412 }, 00:28:53.412 { 00:28:53.412 "method": "bdev_wait_for_examine" 00:28:53.412 } 00:28:53.412 ] 00:28:53.412 }, 00:28:53.412 { 00:28:53.412 "subsystem": "nbd", 00:28:53.412 "config": [] 00:28:53.412 } 00:28:53.412 ] 00:28:53.412 }' 00:28:53.412 [2024-09-28 01:41:49.128085] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:28:53.412 [2024-09-28 01:41:49.128224] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91942 ] 00:28:53.412 [2024-09-28 01:41:49.280709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.671 [2024-09-28 01:41:49.431761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.929 [2024-09-28 01:41:49.657789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:53.929 [2024-09-28 01:41:49.758241] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:54.225 01:41:50 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:54.225 01:41:50 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:28:54.225 01:41:50 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:28:54.225 01:41:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:54.225 01:41:50 keyring_file -- keyring/file.sh@121 -- # jq length 00:28:54.484 01:41:50 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:54.484 01:41:50 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:28:54.484 01:41:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:54.484 01:41:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:54.484 01:41:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:54.484 01:41:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:54.484 01:41:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:54.743 01:41:50 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:28:54.743 01:41:50 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:28:54.743 01:41:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:54.743 01:41:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:54.743 01:41:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:54.743 01:41:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:54.743 01:41:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:55.001 01:41:50 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:28:55.001 01:41:50 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 
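The second bdevperf instance above was started from the JSON produced by save_config on the first instance, passed in via -c /dev/fd/63, and the keyring_get_keys and refcnt checks around this point confirm that both file keys and the nvme0 controller were recreated from that config. A sketch of the same round-trip using an ordinary file (path chosen here purely for illustration) instead of a process-substitution fd:

# Capture the live configuration (keyring, sock, bdev subsystems, ...) ...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config > /tmp/bperf_config.json
# ...and start a fresh bdevperf that rebuilds the keys and the controller from it.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /tmp/bperf_config.json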
00:28:55.001 01:41:50 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:28:55.001 01:41:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:55.260 01:41:51 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:28:55.260 01:41:51 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:55.260 01:41:51 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.aHojX0jd5T /tmp/tmp.e6WCE5iSZ9 00:28:55.260 01:41:51 keyring_file -- keyring/file.sh@20 -- # killprocess 91942 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91942 ']' 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91942 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@955 -- # uname 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91942 00:28:55.260 killing process with pid 91942 00:28:55.260 Received shutdown signal, test time was about 1.000000 seconds 00:28:55.260 00:28:55.260 Latency(us) 00:28:55.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.260 =================================================================================================================== 00:28:55.260 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91942' 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@969 -- # kill 91942 00:28:55.260 01:41:51 keyring_file -- common/autotest_common.sh@974 -- # wait 91942 00:28:56.196 01:41:52 keyring_file -- keyring/file.sh@21 -- # killprocess 91669 00:28:56.196 01:41:52 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91669 ']' 00:28:56.196 01:41:52 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91669 00:28:56.196 01:41:52 keyring_file -- common/autotest_common.sh@955 -- # uname 00:28:56.196 01:41:52 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.455 01:41:52 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91669 00:28:56.455 killing process with pid 91669 00:28:56.455 01:41:52 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:56.455 01:41:52 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:56.455 01:41:52 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91669' 00:28:56.455 01:41:52 keyring_file -- common/autotest_common.sh@969 -- # kill 91669 00:28:56.455 01:41:52 keyring_file -- common/autotest_common.sh@974 -- # wait 91669 00:28:58.361 ************************************ 00:28:58.361 END TEST keyring_file 00:28:58.361 ************************************ 00:28:58.361 00:28:58.361 real 0m18.681s 00:28:58.361 user 0m43.427s 00:28:58.361 sys 0m2.912s 00:28:58.361 01:41:53 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.361 01:41:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:58.361 01:41:53 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:28:58.361 01:41:53 -- spdk/autotest.sh@290 -- # run_test 
keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:58.361 01:41:53 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.361 01:41:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.361 01:41:53 -- common/autotest_common.sh@10 -- # set +x 00:28:58.361 ************************************ 00:28:58.361 START TEST keyring_linux 00:28:58.361 ************************************ 00:28:58.361 01:41:53 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:58.361 Joined session keyring: 363762990 00:28:58.361 * Looking for test storage... 00:28:58.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:58.361 01:41:54 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:58.361 01:41:54 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:28:58.361 01:41:54 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:58.361 01:41:54 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@345 -- # : 1 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.361 01:41:54 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@368 -- # return 0 00:28:58.362 01:41:54 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.362 01:41:54 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:58.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.362 --rc genhtml_branch_coverage=1 00:28:58.362 --rc genhtml_function_coverage=1 00:28:58.362 --rc genhtml_legend=1 00:28:58.362 --rc geninfo_all_blocks=1 00:28:58.362 --rc geninfo_unexecuted_blocks=1 00:28:58.362 00:28:58.362 ' 00:28:58.362 01:41:54 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:58.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.362 --rc genhtml_branch_coverage=1 00:28:58.362 --rc genhtml_function_coverage=1 00:28:58.362 --rc genhtml_legend=1 00:28:58.362 --rc geninfo_all_blocks=1 00:28:58.362 --rc geninfo_unexecuted_blocks=1 00:28:58.362 00:28:58.362 ' 00:28:58.362 01:41:54 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:58.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.362 --rc genhtml_branch_coverage=1 00:28:58.362 --rc genhtml_function_coverage=1 00:28:58.362 --rc genhtml_legend=1 00:28:58.362 --rc geninfo_all_blocks=1 00:28:58.362 --rc geninfo_unexecuted_blocks=1 00:28:58.362 00:28:58.362 ' 00:28:58.362 01:41:54 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:58.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.362 --rc genhtml_branch_coverage=1 00:28:58.362 --rc genhtml_function_coverage=1 00:28:58.362 --rc genhtml_legend=1 00:28:58.362 --rc geninfo_all_blocks=1 00:28:58.362 --rc geninfo_unexecuted_blocks=1 00:28:58.362 00:28:58.362 ' 00:28:58.362 01:41:54 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.362 01:41:54 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f8eaa80b-beb5-4887-8952-726ced1ba196 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=f8eaa80b-beb5-4887-8952-726ced1ba196 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.362 01:41:54 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.362 01:41:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.362 01:41:54 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.362 01:41:54 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.362 01:41:54 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:58.362 01:41:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.362 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:58.362 01:41:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:58.362 01:41:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:58.362 01:41:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:58.362 01:41:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:58.362 01:41:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:58.362 01:41:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@729 -- # python - 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:58.362 /tmp/:spdk-test:key0 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:58.362 01:41:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:58.362 01:41:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:28:58.362 01:41:54 keyring_linux -- nvmf/common.sh@729 -- # python - 00:28:58.643 01:41:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:58.643 /tmp/:spdk-test:key1 00:28:58.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.643 01:41:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:58.643 01:41:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=92088 00:28:58.643 01:41:54 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 92088 00:28:58.643 01:41:54 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.643 01:41:54 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 92088 ']' 00:28:58.643 01:41:54 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.643 01:41:54 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.643 01:41:54 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.643 01:41:54 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.643 01:41:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:58.643 [2024-09-28 01:41:54.431295] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
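The two /tmp/:spdk-test:key* files prepared above carry the PSK in the NVMe TLS interchange form emitted by format_interchange_psk; a rough shell sketch of that encoding, assuming the 36-byte payload is the key bytes followed by their little-endian CRC32 and that the 00 field indicates no hash transform, is:

  key=00112233445566778899aabbccddeeff   # same cleartext key0 as above
  psk=$(python3 -c "import base64, struct, sys, zlib; k = sys.argv[1].encode(); print('NVMeTLSkey-1:00:' + base64.b64encode(k + struct.pack('<I', zlib.crc32(k))).decode() + ':')" "$key")
  printf '%s\n' "$psk" > /tmp/:spdk-test:key0 && chmod 0600 /tmp/:spdk-test:key0   # mirrors the chmod 0600 traced above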
00:28:58.643 [2024-09-28 01:41:54.431723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92088 ] 00:28:58.915 [2024-09-28 01:41:54.602902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.915 [2024-09-28 01:41:54.749624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.174 [2024-09-28 01:41:54.926437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:28:59.741 01:41:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:59.741 [2024-09-28 01:41:55.380053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.741 null0 00:28:59.741 [2024-09-28 01:41:55.412052] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:59.741 [2024-09-28 01:41:55.412488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.741 01:41:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:59.741 897632370 00:28:59.741 01:41:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:59.741 894296941 00:28:59.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:59.741 01:41:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=92105 00:28:59.741 01:41:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 92105 /var/tmp/bperf.sock 00:28:59.741 01:41:55 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 92105 ']' 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:59.741 01:41:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:59.741 [2024-09-28 01:41:55.546988] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:28:59.741 [2024-09-28 01:41:55.547145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92105 ] 00:29:00.009 [2024-09-28 01:41:55.717131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.009 [2024-09-28 01:41:55.867900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.576 01:41:56 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:00.576 01:41:56 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:00.576 01:41:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:00.576 01:41:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:00.834 01:41:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:00.834 01:41:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.401 [2024-09-28 01:41:57.036251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:01.401 01:41:57 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:01.401 01:41:57 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:01.401 [2024-09-28 01:41:57.330487] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:01.660 nvme0n1 00:29:01.660 01:41:57 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:01.660 01:41:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:01.660 01:41:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:01.660 01:41:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:01.660 01:41:57 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.660 01:41:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:01.919 01:41:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:01.919 01:41:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:01.919 01:41:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:01.919 01:41:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:01.919 01:41:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:01.919 01:41:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.919 01:41:57 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.177 01:41:57 keyring_linux -- keyring/linux.sh@25 -- # sn=897632370 00:29:02.177 01:41:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:02.178 01:41:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
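The check_keys helper above resolves each name in the kernel session keyring and compares the serial against the one printed when the key was added; the same cycle can be replayed by hand with keyutils (the serial shown is the one from this run, and the PSK file is the one prepared earlier):

  sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)   # add prints the serial, 897632370 here
  keyctl search @s user :spdk-test:key0                                    # look the serial up again by name
  keyctl print "$sn"                                                       # dump the stored interchange-format PSK
  keyctl unlink "$sn"                                                      # "1 links removed", as in the cleanup below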
00:29:02.178 01:41:57 keyring_linux -- keyring/linux.sh@26 -- # [[ 897632370 == \8\9\7\6\3\2\3\7\0 ]] 00:29:02.178 01:41:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 897632370 00:29:02.178 01:41:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:02.178 01:41:57 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.178 Running I/O for 1 seconds... 00:29:03.552 9204.00 IOPS, 35.95 MiB/s 00:29:03.552 Latency(us) 00:29:03.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.552 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:03.552 nvme0n1 : 1.01 9206.50 35.96 0.00 0.00 13801.21 4259.84 17635.14 00:29:03.552 =================================================================================================================== 00:29:03.552 Total : 9206.50 35.96 0.00 0.00 13801.21 4259.84 17635.14 00:29:03.552 { 00:29:03.552 "results": [ 00:29:03.552 { 00:29:03.552 "job": "nvme0n1", 00:29:03.552 "core_mask": "0x2", 00:29:03.552 "workload": "randread", 00:29:03.552 "status": "finished", 00:29:03.552 "queue_depth": 128, 00:29:03.552 "io_size": 4096, 00:29:03.552 "runtime": 1.01374, 00:29:03.552 "iops": 9206.502653540356, 00:29:03.552 "mibps": 35.962900990392015, 00:29:03.552 "io_failed": 0, 00:29:03.552 "io_timeout": 0, 00:29:03.552 "avg_latency_us": 13801.21113000789, 00:29:03.552 "min_latency_us": 4259.84, 00:29:03.552 "max_latency_us": 17635.14181818182 00:29:03.552 } 00:29:03.552 ], 00:29:03.553 "core_count": 1 00:29:03.553 } 00:29:03.553 01:41:59 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:03.553 01:41:59 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:03.553 01:41:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:03.553 01:41:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:03.553 01:41:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:03.553 01:41:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:03.553 01:41:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:03.553 01:41:59 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.811 01:41:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:03.811 01:41:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:03.811 01:41:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:03.812 01:41:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:03.812 01:41:59 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:29:03.812 01:41:59 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:03.812 01:41:59 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:03.812 
01:41:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.812 01:41:59 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:03.812 01:41:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.812 01:41:59 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:03.812 01:41:59 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:04.071 [2024-09-28 01:41:59.898312] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:04.071 [2024-09-28 01:41:59.898400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:29:04.071 [2024-09-28 01:41:59.899360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:29:04.071 [2024-09-28 01:41:59.900352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:04.071 [2024-09-28 01:41:59.900385] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:04.071 [2024-09-28 01:41:59.900400] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:04.071 [2024-09-28 01:41:59.900416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:04.071 request: 00:29:04.071 { 00:29:04.071 "name": "nvme0", 00:29:04.071 "trtype": "tcp", 00:29:04.071 "traddr": "127.0.0.1", 00:29:04.071 "adrfam": "ipv4", 00:29:04.071 "trsvcid": "4420", 00:29:04.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:04.071 "prchk_reftag": false, 00:29:04.071 "prchk_guard": false, 00:29:04.071 "hdgst": false, 00:29:04.071 "ddgst": false, 00:29:04.071 "psk": ":spdk-test:key1", 00:29:04.071 "allow_unrecognized_csi": false, 00:29:04.071 "method": "bdev_nvme_attach_controller", 00:29:04.071 "req_id": 1 00:29:04.071 } 00:29:04.071 Got JSON-RPC error response 00:29:04.071 response: 00:29:04.071 { 00:29:04.071 "code": -5, 00:29:04.071 "message": "Input/output error" 00:29:04.071 } 00:29:04.071 01:41:59 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:29:04.071 01:41:59 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:04.071 01:41:59 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:04.071 01:41:59 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:04.071 01:41:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@33 -- # sn=897632370 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 897632370 00:29:04.072 1 links removed 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@33 -- # sn=894296941 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 894296941 00:29:04.072 1 links removed 00:29:04.072 01:41:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 92105 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 92105 ']' 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 92105 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92105 00:29:04.072 killing process with pid 92105 00:29:04.072 Received shutdown signal, test time was about 1.000000 seconds 00:29:04.072 00:29:04.072 Latency(us) 00:29:04.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.072 =================================================================================================================== 00:29:04.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:04.072 01:41:59 
keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92105' 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 92105 00:29:04.072 01:41:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 92105 00:29:05.007 01:42:00 keyring_linux -- keyring/linux.sh@42 -- # killprocess 92088 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 92088 ']' 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 92088 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92088 00:29:05.007 killing process with pid 92088 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92088' 00:29:05.007 01:42:00 keyring_linux -- common/autotest_common.sh@969 -- # kill 92088 00:29:05.008 01:42:00 keyring_linux -- common/autotest_common.sh@974 -- # wait 92088 00:29:06.910 00:29:06.910 real 0m8.787s 00:29:06.910 user 0m15.614s 00:29:06.910 sys 0m1.552s 00:29:06.910 01:42:02 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:06.910 01:42:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:06.910 ************************************ 00:29:06.910 END TEST keyring_linux 00:29:06.910 ************************************ 00:29:06.910 01:42:02 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:06.910 01:42:02 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:29:06.910 01:42:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:06.910 01:42:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:06.910 01:42:02 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:29:06.910 01:42:02 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:29:06.910 01:42:02 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:29:06.910 01:42:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:06.910 01:42:02 -- common/autotest_common.sh@10 -- # set +x 00:29:06.910 01:42:02 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:29:06.910 01:42:02 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:06.910 01:42:02 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:06.910 01:42:02 -- common/autotest_common.sh@10 -- # set +x 00:29:08.814 INFO: APP EXITING 00:29:08.814 INFO: killing all VMs 00:29:08.814 INFO: killing vhost app 00:29:08.814 INFO: EXIT DONE 00:29:09.382 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:09.641 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:09.641 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:10.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:10.208 Cleaning 00:29:10.208 Removing: /var/run/dpdk/spdk0/config 00:29:10.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:10.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:10.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:10.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:10.208 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:10.208 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:10.208 Removing: /var/run/dpdk/spdk1/config 00:29:10.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:10.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:10.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:10.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:10.208 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:10.208 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:10.208 Removing: /var/run/dpdk/spdk2/config 00:29:10.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:10.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:10.466 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:10.466 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:10.466 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:10.466 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:10.466 Removing: /var/run/dpdk/spdk3/config 00:29:10.466 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:10.466 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:10.466 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:10.466 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:10.466 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:10.466 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:10.466 Removing: /var/run/dpdk/spdk4/config 00:29:10.466 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:10.466 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:10.466 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:10.466 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:10.466 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:10.466 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:10.466 Removing: /dev/shm/nvmf_trace.0 00:29:10.466 Removing: /dev/shm/spdk_tgt_trace.pid57409 00:29:10.466 Removing: /var/run/dpdk/spdk0 00:29:10.466 Removing: /var/run/dpdk/spdk1 00:29:10.466 Removing: /var/run/dpdk/spdk2 00:29:10.466 Removing: /var/run/dpdk/spdk3 00:29:10.466 Removing: /var/run/dpdk/spdk4 00:29:10.466 Removing: /var/run/dpdk/spdk_pid57196 00:29:10.467 Removing: /var/run/dpdk/spdk_pid57409 00:29:10.467 Removing: /var/run/dpdk/spdk_pid57633 00:29:10.467 Removing: /var/run/dpdk/spdk_pid57731 00:29:10.467 Removing: /var/run/dpdk/spdk_pid57782 00:29:10.467 Removing: /var/run/dpdk/spdk_pid57910 00:29:10.467 Removing: /var/run/dpdk/spdk_pid57928 00:29:10.467 Removing: /var/run/dpdk/spdk_pid58087 00:29:10.467 Removing: /var/run/dpdk/spdk_pid58297 00:29:10.467 Removing: /var/run/dpdk/spdk_pid58456 00:29:10.467 Removing: /var/run/dpdk/spdk_pid58561 00:29:10.467 Removing: /var/run/dpdk/spdk_pid58667 00:29:10.467 Removing: 
/var/run/dpdk/spdk_pid58779 00:29:10.467 Removing: /var/run/dpdk/spdk_pid58882 00:29:10.467 Removing: /var/run/dpdk/spdk_pid58921 00:29:10.467 Removing: /var/run/dpdk/spdk_pid58963 00:29:10.467 Removing: /var/run/dpdk/spdk_pid59039 00:29:10.467 Removing: /var/run/dpdk/spdk_pid59162 00:29:10.467 Removing: /var/run/dpdk/spdk_pid59632 00:29:10.467 Removing: /var/run/dpdk/spdk_pid59696 00:29:10.467 Removing: /var/run/dpdk/spdk_pid59763 00:29:10.467 Removing: /var/run/dpdk/spdk_pid59788 00:29:10.467 Removing: /var/run/dpdk/spdk_pid59914 00:29:10.467 Removing: /var/run/dpdk/spdk_pid59930 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60061 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60077 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60146 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60164 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60223 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60246 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60434 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60476 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60565 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60922 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60935 00:29:10.467 Removing: /var/run/dpdk/spdk_pid60979 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61013 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61041 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61077 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61097 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61130 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61161 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61187 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61214 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61245 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61271 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61304 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61334 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61355 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61388 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61419 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61441 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61472 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61520 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61546 00:29:10.467 Removing: /var/run/dpdk/spdk_pid61587 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61671 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61717 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61739 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61779 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61801 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61826 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61880 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61906 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61952 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61968 00:29:10.725 Removing: /var/run/dpdk/spdk_pid61996 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62017 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62039 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62066 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62082 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62109 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62150 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62188 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62215 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62256 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62279 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62302 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62355 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62384 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62417 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62442 
00:29:10.725 Removing: /var/run/dpdk/spdk_pid62467 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62481 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62506 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62530 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62545 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62570 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62659 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62746 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62898 00:29:10.725 Removing: /var/run/dpdk/spdk_pid62951 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63006 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63038 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63067 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63093 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63143 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63171 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63261 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63311 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63389 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63497 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63582 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63634 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63751 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63811 00:29:10.725 Removing: /var/run/dpdk/spdk_pid63861 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64117 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64235 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64270 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64306 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64359 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64410 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64456 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64499 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64915 00:29:10.725 Removing: /var/run/dpdk/spdk_pid64954 00:29:10.725 Removing: /var/run/dpdk/spdk_pid65334 00:29:10.725 Removing: /var/run/dpdk/spdk_pid65813 00:29:10.725 Removing: /var/run/dpdk/spdk_pid66100 00:29:10.725 Removing: /var/run/dpdk/spdk_pid67035 00:29:10.725 Removing: /var/run/dpdk/spdk_pid67999 00:29:10.725 Removing: /var/run/dpdk/spdk_pid68134 00:29:10.725 Removing: /var/run/dpdk/spdk_pid68214 00:29:10.725 Removing: /var/run/dpdk/spdk_pid69677 00:29:10.725 Removing: /var/run/dpdk/spdk_pid70052 00:29:10.725 Removing: /var/run/dpdk/spdk_pid73811 00:29:10.725 Removing: /var/run/dpdk/spdk_pid74225 00:29:10.725 Removing: /var/run/dpdk/spdk_pid74338 00:29:10.725 Removing: /var/run/dpdk/spdk_pid74484 00:29:10.725 Removing: /var/run/dpdk/spdk_pid74520 00:29:10.725 Removing: /var/run/dpdk/spdk_pid74561 00:29:10.725 Removing: /var/run/dpdk/spdk_pid74607 00:29:10.725 Removing: /var/run/dpdk/spdk_pid74730 00:29:10.725 Removing: /var/run/dpdk/spdk_pid74873 00:29:10.984 Removing: /var/run/dpdk/spdk_pid75075 00:29:10.984 Removing: /var/run/dpdk/spdk_pid75176 00:29:10.984 Removing: /var/run/dpdk/spdk_pid75388 00:29:10.984 Removing: /var/run/dpdk/spdk_pid75491 00:29:10.984 Removing: /var/run/dpdk/spdk_pid75597 00:29:10.984 Removing: /var/run/dpdk/spdk_pid75983 00:29:10.984 Removing: /var/run/dpdk/spdk_pid76416 00:29:10.984 Removing: /var/run/dpdk/spdk_pid76417 00:29:10.984 Removing: /var/run/dpdk/spdk_pid76418 00:29:10.984 Removing: /var/run/dpdk/spdk_pid76700 00:29:10.984 Removing: /var/run/dpdk/spdk_pid76989 00:29:10.984 Removing: /var/run/dpdk/spdk_pid77003 00:29:10.984 Removing: /var/run/dpdk/spdk_pid79409 00:29:10.984 Removing: /var/run/dpdk/spdk_pid79412 00:29:10.984 Removing: /var/run/dpdk/spdk_pid79756 00:29:10.984 Removing: /var/run/dpdk/spdk_pid79771 00:29:10.984 Removing: 
/var/run/dpdk/spdk_pid79790 00:29:10.984 Removing: /var/run/dpdk/spdk_pid79824 00:29:10.984 Removing: /var/run/dpdk/spdk_pid79836 00:29:10.984 Removing: /var/run/dpdk/spdk_pid79920 00:29:10.984 Removing: /var/run/dpdk/spdk_pid79929 00:29:10.984 Removing: /var/run/dpdk/spdk_pid80038 00:29:10.984 Removing: /var/run/dpdk/spdk_pid80041 00:29:10.984 Removing: /var/run/dpdk/spdk_pid80152 00:29:10.984 Removing: /var/run/dpdk/spdk_pid80162 00:29:10.984 Removing: /var/run/dpdk/spdk_pid80606 00:29:10.984 Removing: /var/run/dpdk/spdk_pid80648 00:29:10.984 Removing: /var/run/dpdk/spdk_pid80745 00:29:10.984 Removing: /var/run/dpdk/spdk_pid80822 00:29:10.984 Removing: /var/run/dpdk/spdk_pid81198 00:29:10.984 Removing: /var/run/dpdk/spdk_pid81402 00:29:10.984 Removing: /var/run/dpdk/spdk_pid81850 00:29:10.984 Removing: /var/run/dpdk/spdk_pid82417 00:29:10.984 Removing: /var/run/dpdk/spdk_pid83293 00:29:10.984 Removing: /var/run/dpdk/spdk_pid83950 00:29:10.984 Removing: /var/run/dpdk/spdk_pid83959 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86017 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86085 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86154 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86225 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86365 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86432 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86493 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86557 00:29:10.984 Removing: /var/run/dpdk/spdk_pid86956 00:29:10.984 Removing: /var/run/dpdk/spdk_pid88173 00:29:10.984 Removing: /var/run/dpdk/spdk_pid88324 00:29:10.984 Removing: /var/run/dpdk/spdk_pid88576 00:29:10.984 Removing: /var/run/dpdk/spdk_pid89190 00:29:10.984 Removing: /var/run/dpdk/spdk_pid89350 00:29:10.984 Removing: /var/run/dpdk/spdk_pid89512 00:29:10.984 Removing: /var/run/dpdk/spdk_pid89608 00:29:10.984 Removing: /var/run/dpdk/spdk_pid89771 00:29:10.984 Removing: /var/run/dpdk/spdk_pid89880 00:29:10.984 Removing: /var/run/dpdk/spdk_pid90602 00:29:10.984 Removing: /var/run/dpdk/spdk_pid90640 00:29:10.984 Removing: /var/run/dpdk/spdk_pid90676 00:29:10.984 Removing: /var/run/dpdk/spdk_pid91136 00:29:10.984 Removing: /var/run/dpdk/spdk_pid91171 00:29:10.984 Removing: /var/run/dpdk/spdk_pid91203 00:29:10.984 Removing: /var/run/dpdk/spdk_pid91669 00:29:10.984 Removing: /var/run/dpdk/spdk_pid91685 00:29:10.984 Removing: /var/run/dpdk/spdk_pid91942 00:29:10.984 Removing: /var/run/dpdk/spdk_pid92088 00:29:10.984 Removing: /var/run/dpdk/spdk_pid92105 00:29:10.984 Clean 00:29:11.243 01:42:06 -- common/autotest_common.sh@1451 -- # return 0 00:29:11.243 01:42:06 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:29:11.243 01:42:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.243 01:42:06 -- common/autotest_common.sh@10 -- # set +x 00:29:11.243 01:42:06 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:29:11.243 01:42:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.243 01:42:06 -- common/autotest_common.sh@10 -- # set +x 00:29:11.243 01:42:07 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:11.243 01:42:07 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:11.243 01:42:07 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:11.243 01:42:07 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:29:11.243 01:42:07 -- spdk/autotest.sh@394 -- # hostname 00:29:11.243 01:42:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:11.500 geninfo: WARNING: invalid characters removed from testname! 00:29:38.042 01:42:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:38.042 01:42:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:39.946 01:42:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:42.481 01:42:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:45.066 01:42:40 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:47.601 01:42:42 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:50.135 01:42:45 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:50.135 01:42:45 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:29:50.135 01:42:45 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:29:50.135 01:42:45 -- common/autotest_common.sh@1681 -- $ lcov --version 00:29:50.135 01:42:45 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:29:50.135 01:42:45 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:29:50.135 01:42:45 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:29:50.135 01:42:45 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:29:50.135 01:42:45 -- scripts/common.sh@336 -- $ IFS=.-: 00:29:50.135 01:42:45 -- scripts/common.sh@336 -- $ read -ra ver1 
00:29:50.135 01:42:45 -- scripts/common.sh@337 -- $ IFS=.-:
00:29:50.135 01:42:45 -- scripts/common.sh@337 -- $ read -ra ver2
00:29:50.135 01:42:45 -- scripts/common.sh@338 -- $ local 'op=<'
00:29:50.135 01:42:45 -- scripts/common.sh@340 -- $ ver1_l=2
00:29:50.135 01:42:45 -- scripts/common.sh@341 -- $ ver2_l=1
00:29:50.135 01:42:45 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:29:50.135 01:42:45 -- scripts/common.sh@344 -- $ case "$op" in
00:29:50.135 01:42:45 -- scripts/common.sh@345 -- $ : 1
00:29:50.135 01:42:45 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:29:50.135 01:42:45 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:50.135 01:42:45 -- scripts/common.sh@365 -- $ decimal 1
00:29:50.135 01:42:45 -- scripts/common.sh@353 -- $ local d=1
00:29:50.135 01:42:45 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:29:50.135 01:42:45 -- scripts/common.sh@355 -- $ echo 1
00:29:50.135 01:42:45 -- scripts/common.sh@365 -- $ ver1[v]=1
00:29:50.135 01:42:45 -- scripts/common.sh@366 -- $ decimal 2
00:29:50.135 01:42:45 -- scripts/common.sh@353 -- $ local d=2
00:29:50.135 01:42:45 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:29:50.135 01:42:45 -- scripts/common.sh@355 -- $ echo 2
00:29:50.135 01:42:45 -- scripts/common.sh@366 -- $ ver2[v]=2
00:29:50.135 01:42:45 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:29:50.135 01:42:45 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:29:50.135 01:42:45 -- scripts/common.sh@368 -- $ return 0
00:29:50.135 01:42:45 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:50.135 01:42:45 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:29:50.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:50.135 --rc genhtml_branch_coverage=1
00:29:50.135 --rc genhtml_function_coverage=1
00:29:50.135 --rc genhtml_legend=1
00:29:50.135 --rc geninfo_all_blocks=1
00:29:50.135 --rc geninfo_unexecuted_blocks=1
00:29:50.135 
00:29:50.135 '
00:29:50.135 01:42:45 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:29:50.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:50.135 --rc genhtml_branch_coverage=1
00:29:50.135 --rc genhtml_function_coverage=1
00:29:50.135 --rc genhtml_legend=1
00:29:50.135 --rc geninfo_all_blocks=1
00:29:50.135 --rc geninfo_unexecuted_blocks=1
00:29:50.135 
00:29:50.135 '
00:29:50.135 01:42:45 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:29:50.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:50.135 --rc genhtml_branch_coverage=1
00:29:50.135 --rc genhtml_function_coverage=1
00:29:50.135 --rc genhtml_legend=1
00:29:50.135 --rc geninfo_all_blocks=1
00:29:50.135 --rc geninfo_unexecuted_blocks=1
00:29:50.135 
00:29:50.135 '
00:29:50.135 01:42:45 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:29:50.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:50.135 --rc genhtml_branch_coverage=1
00:29:50.135 --rc genhtml_function_coverage=1
00:29:50.135 --rc genhtml_legend=1
00:29:50.135 --rc geninfo_all_blocks=1
00:29:50.135 --rc geninfo_unexecuted_blocks=1
00:29:50.135 
00:29:50.135 '
00:29:50.135 01:42:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:29:50.135 01:42:45 -- scripts/common.sh@15 -- $ shopt -s extglob
00:29:50.135 01:42:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:50.135 01:42:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:50.135 01:42:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:50.135 01:42:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:50.136 01:42:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:50.136 01:42:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:50.136 01:42:45 -- paths/export.sh@5 -- $ export PATH
00:29:50.136 01:42:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:50.136 01:42:45 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:29:50.136 01:42:45 -- common/autobuild_common.sh@479 -- $ date +%s
00:29:50.136 01:42:45 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727487765.XXXXXX
00:29:50.136 01:42:45 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727487765.6lVe6A
00:29:50.136 01:42:45 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:29:50.136 01:42:45 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:29:50.136 01:42:45 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:29:50.136 01:42:45 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:29:50.136 01:42:45 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:29:50.136 01:42:45 -- common/autobuild_common.sh@495 -- $ get_config_params
00:29:50.136 01:42:45 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:29:50.136 01:42:45 -- common/autotest_common.sh@10 -- $ set +x
00:29:50.136 01:42:45 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring'
00:29:50.136 01:42:45 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:29:50.136 01:42:45 -- pm/common@17 -- $ local monitor
00:29:50.136 01:42:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:50.136 01:42:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:50.136 01:42:45 -- pm/common@25 -- $ sleep 1
00:29:50.136 01:42:45 -- pm/common@21 -- $ date +%s
00:29:50.136 01:42:45 -- pm/common@21 -- $ date +%s
00:29:50.136 01:42:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727487765
00:29:50.136 01:42:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727487765
00:29:50.136 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727487765_collect-cpu-load.pm.log
00:29:50.136 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727487765_collect-vmstat.pm.log
00:29:50.703 01:42:46 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:29:50.703 01:42:46 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:29:50.703 01:42:46 -- spdk/autopackage.sh@14 -- $ timing_finish
00:29:50.703 01:42:46 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:50.703 01:42:46 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:29:50.703 01:42:46 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:50.962 01:42:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:50.962 01:42:46 -- pm/common@29 -- $ signal_monitor_resources TERM
00:29:50.962 01:42:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:29:50.962 01:42:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:50.962 01:42:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:29:50.962 01:42:46 -- pm/common@44 -- $ pid=93896
00:29:50.962 01:42:46 -- pm/common@50 -- $ kill -TERM 93896
00:29:50.962 01:42:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:50.962 01:42:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:29:50.962 01:42:46 -- pm/common@44 -- $ pid=93898
00:29:50.962 01:42:46 -- pm/common@50 -- $ kill -TERM 93898
00:29:50.962 + [[ -n 5257 ]]
00:29:50.962 + sudo kill 5257
00:29:50.973 [Pipeline] }
00:29:50.988 [Pipeline] // timeout
00:29:50.994 [Pipeline] }
00:29:51.007 [Pipeline] // stage
00:29:51.013 [Pipeline] }
00:29:51.027 [Pipeline] // catchError
00:29:51.036 [Pipeline] stage
00:29:51.038 [Pipeline] { (Stop VM)
00:29:51.051 [Pipeline] sh
00:29:51.332 + vagrant halt
00:29:53.868 ==> default: Halting domain...
00:30:00.448 [Pipeline] sh
00:30:00.728 + vagrant destroy -f
00:30:03.262 ==> default: Removing domain...
00:30:03.534 [Pipeline] sh
00:30:03.816 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:30:03.826 [Pipeline] }
00:30:03.840 [Pipeline] // stage
00:30:03.845 [Pipeline] }
00:30:03.860 [Pipeline] // dir
00:30:03.865 [Pipeline] }
00:30:03.880 [Pipeline] // wrap
00:30:03.886 [Pipeline] }
00:30:03.899 [Pipeline] // catchError
00:30:03.908 [Pipeline] stage
00:30:03.910 [Pipeline] { (Epilogue)
00:30:03.923 [Pipeline] sh
00:30:04.206 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:09.521 [Pipeline] catchError
00:30:09.523 [Pipeline] {
00:30:09.535 [Pipeline] sh
00:30:09.818 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:10.080 Artifacts sizes are good
00:30:10.104 [Pipeline] }
00:30:10.113 [Pipeline] // catchError
00:30:10.122 [Pipeline] archiveArtifacts
00:30:10.127 Archiving artifacts
00:30:10.293 [Pipeline] cleanWs
00:30:10.302 [WS-CLEANUP] Deleting project workspace...
00:30:10.302 [WS-CLEANUP] Deferred wipeout is used...
00:30:10.307 [WS-CLEANUP] done
00:30:10.309 [Pipeline] }
00:30:10.320 [Pipeline] // stage
00:30:10.323 [Pipeline] }
00:30:10.335 [Pipeline] // node
00:30:10.339 [Pipeline] End of Pipeline
00:30:10.375 Finished: SUCCESS